More Money for What?

The RT-D reports a march to the Capitol “demanding more funding for public education.”  The story dwells on recent cuts in funding for schools but mentions only one specific problem to be remedied, “a giant rat in the middle of the gym floor.”

The notion of buying better schooling is easy to understand but there is precious little evidence that it works in practice.  To the contrary, we have seen that the SOL pass rates in Virginia’s school divisions are quite unrelated to the divisions’ expenditures. 

In the other direction, WashPost reports a study that found an effect of increased expenditures, suggesting that benefits were most obvious for students from poor families. 

If larger expenditures indeed benefit less affluent students, the Virginia data should give us a yardstick: VDOE posts data for students who receive free and reduced price lunches.  And the SOL pass rates provide a measure of what counts – output – not what’s easy to count – inputs. 

To start, here are the 2017 division average reading pass rates for students who are economically disadvantaged (“ED”) or are not (“Not ED”), plotted vs. the division expenditure per student for the day school operation.  (We have to go back a year because the 2018 expenditure data are not yet up on the VDOE Web site.)

image

No correlation.  Zip.  Nada.  Whatever those expensive divisions are getting for their money, it’s not performance (by this measure), either of the ED or the Not ED students.
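For readers who want to poke at the numbers themselves, here is a minimal sketch of the calculation behind this kind of graph: a least-squares line and its R-squared for each group.  The file name and column names are hypothetical stand-ins for whatever export you pull from the VDOE site.

```python
# Sketch: scatter division pass rates against per-pupil expenditure and fit a
# least-squares line for each group.  File and column names are hypothetical;
# substitute whatever your VDOE export actually uses.
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt

df = pd.read_csv("division_reading_2017.csv")  # hypothetical: division, group, expenditure_per_student, pass_rate

fig, ax = plt.subplots()
for group, color in [("Not ED", "tab:blue"), ("ED", "tab:orange")]:
    sub = df[df["group"] == group]
    x = sub["expenditure_per_student"].to_numpy(dtype=float)
    y = sub["pass_rate"].to_numpy(dtype=float)

    slope, intercept = np.polyfit(x, y, 1)          # ordinary least-squares straight line
    fitted = slope * x + intercept
    r_squared = 1 - np.sum((y - fitted) ** 2) / np.sum((y - np.mean(y)) ** 2)

    ax.scatter(x, y, color=color, label=f"{group} (R-squared = {r_squared:.0%})")
    ax.plot(np.sort(x), slope * np.sort(x) + intercept, color=color)

ax.set_xlabel("Day school expenditure per student ($)")
ax.set_ylabel("Reading SOL pass rate (%)")
ax.legend()
plt.show()
```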

How about math?

image

The math data might suggest that performance actually decreases with increasing expenditures, but the R-squared values are much too small to draw any conclusions.

Of course, much of the day school spending goes to administration, health services, transportation, and O&M, things that may or may not be related to students’ performance.  So let’s turn to salaries: Table 19 in the Superintendent’s Annual Report gives us the division average instructional salary.

The reading data give mixed results but, again, not enough correlation to draw any conclusions.

image

The math data sing the same song.

image

In sum: At least as to across-the-board salary averages, the divisions that pay more – up to over twice as much – don’t get better reading or math SOL pass rates.

Yet the Governor says he wants to raise teacher pay.  He does not tell us how that money might translate into better learning.  Of course, he is a politician and he may have something other than learning in mind.

Class size is another popular consideration (again, however, a measure of inputs, not of learning).  Table 17 gives us the number of instructional positions per thousand students; I’ve recalculated to number per hundred. 

Starting again with reading:

image

The range of positions per hundred there covers a ratio of 2.75.  The performance does not correlate.

How about math?

image

Same story.

It is clear that, in terms of division averages, the per student expenditure, teacher salary, and number of teachers per student have no power to predict average SOL pass rates, either of ED or Not ED students.  In light of that, it would be helpful for the marchers and other folks who want to spend more on schools to explain exactly how the additional funds they advocate would improve student performance and, for sure, how that effect would be measured.

—————————

Note added on 12/12:

The estimable Jim Bacon points out that the large NoVa divisions might bias the data because of the high cost of living up there.

Fair enough.  Let’s repeat the analysis with those Big Guys (Alexandria, Arlington, Fairfax, Loudoun, Prince William, Spotsylvania, and Stafford) removed:

image

image

image

image

image

image

You might notice that the Big Guys disappear at the high salary end but mostly are down in the pack as to expenditure per student and positions per hundred.  In any case, the data tell the same story without the NoVa divisions as with them.

Time to Toughen the SOLs?

Longwood professor Emily Williamson argued in 2012 that “SOL tests have helped Virginia to raise its standards but now it is time to raise them again.”

Prof. Williamson used accreditation to measure improvement.  In light of the wholesale and shifting manipulation of the accreditation process (e.g., the recent change that resulted in no Virginia school being denied accreditation), accreditation is a bogus measure. 

The SOL pass rates are less problematic and they give us a chance to test the hypothesis, especially as to whether raising standards would improve learning.  Let’s look at those data.

As of 2012, the average pass rates had, for the most part, risen consistently since 2006, the start date of the VDOE database (as accessible on the Web).

image

History and Social Science dropped in 2011 in response to a revision of the tests.  The smaller (but, on the statewide scale, non-trivial) 2011 decreases in the writing, reading, and math pass rates probably were the result of the General Assembly’s crackdown on abuse of the VGLA to artificially boost pass rates. 

Overall, however, the data were consistent with the Williamson hypothesis, at least as to earlier changes.  Then, in 2012 the Board of Education (taking the same view as Williamson?) installed new, tougher math tests; 2013 saw the introduction of new, more rigorous tests in English and Science.  Along with the earlier change in the history tests, this provided a chance to test the Williamson hypothesis going forward.

image

With each new test, the statewide pass rates fell, as would be expected.  The rates then bounced back.  Some of them. 

The math recovery from 2012 to 2015 might be explained by the Williamson hypothesis (although the huge drop in 2012 suggests that the earlier high pass rates didn’t represent a lot of learning of mathematics).  Reading, writing, and science enjoyed smaller recoveries.

These data do not tell us whether those pass rate increases reflect better learning or short term improvement from teaching better aligned with the new tests.  The year’s delay in improvement of the science, writing, and reading numbers (3 years for history) implies that there was a delay in adjusting the curricula and teaching to reflect the new tests.

The pass rate droops since 2016, however, falsify the notion of any long-term, test-driven improvement.  The Board of Education ran the test that Prof. Williamson suggested; the test failed.

Hmmm.  Let’s look at some division data to see if there is more to be learned regarding the Williamson hypothesis.

First, Richmond:

image

Looking back for a moment, the data give us a picture of the magnitude of its VGLA cheating.  Here is a summary of Richmond’s pass rate gains from 2006 to 2011 and the decreases from 2010 to 2011.

image

The History & SS drop can be attributed, at least in part, to the new tests.  The other 2011 losses average 52% of the earlier gains.  After we juxtapose those losses with the earlier, spectacular increases in VGLA testing in Richmond, it is hard to escape the conclusion that something like half of those pre-2011 pass rate increases were the result of cheating, not better learning. 

Then we have the performance after the arrival of the new tests.  Aside from the math and reading increases in the two or three years after introduction of the new tests, and the little science bump in the same period, the picture here is of declining performance, not of improvement. 

Thus, even if the Williamson notion of test-driven improvement were to hold in general, it does not predict Richmond’s performance.

For a nice contrast to Richmond, there is Hanover County:

image

Notice the scale: Same length as Richmond but shifted up ten points.

Without thrashing through the details, three things are clear here: It is hard to improve on a 90+% pass rate; there is no smoking gun in the 2011 numbers; and the new tests are not driving any kind of ongoing improvement.

Lynchburg shows a big 2011 drop in Hist. & SS, a huge 2012 math decrease, and little sign that the SOLs are driving recent improvement, aside from the delayed, short-term bounces in the math, reading, and science numbers.  Indeed, the bounces all came in 2015 (even history), which suggests procrastination in aligning the curricula with the new tests.

image

The peer jurisdictions hark back to the Richmond data, with only the writing numbers in Hampton suggesting that SOL testing might be driving recent improvement.

image

image

image

OK.  What about the Big Guy, Fairfax?

image

There’s that pattern again: Improvement until the arrival of the new tests.  Short term improvement after.  Then stagnation or declines.

We know that economically disadvantaged (“ED”) students perform differently on the SOL than their more affluent peers (“Not ED”).  Let’s see how the new tests affected the ED and Not ED groups.

image

As expected, the ED students underperform. 

It looks like much of the 2011 drop was in the ED group.  As to performance on the new tests, nothing new here.

There’s lots of room to argue about the reason(s) but little as to the conclusion: The data prior to the new tests support the Williamson hypothesis; the data after, falsify it, at least after the first couple of years. 

Thus, it looks like we’ve already enjoyed about all the test-driven improvement we’re going to get.  Now it’s time to figure out why the improvement is turning into decay.

——————–

Note added per a comment from the estimable Carol Wolf:

It is clear that performance, as measured by the SOLs, improved after imposition of the testing.  It is less clear whether the cause of the improvement was the standards set by the SOLs or the mere fact that we were measuring performance.

The traditional measures of education have been inputs: Teachers’ credentials, salaries, facilities, budgets, etc., etc.  But none of those tells us how much the kids have learned.  Although SOLs certainly are imperfect, they measure outputs and it may well be that the mere presence of the measuring instrument drove improvement.

Whatever was going on, it seems that the improvements have maxed out and perhaps even started to fade.  This is no reason to abandon the measurement of outputs.  Indeed, this is a powerful reason to refine the measurement (e.g., via the SGP that the Board of “Education” abandoned for no good reason), figure out what works to educate our children, and do more of it.

Failed But Promoted

The data tables in the Superintendent’s Annual Report mostly appear about the same time as the dandelions.  A reader, however, points out that a few tables emerge earlier.  Of interest among those early bloomers, Table 3 reports the 2017-18 fall membership by division along with the number of students NOT repeating the same grade from 2016-17. 

The difference between those two numbers gives the number who DID repeat (i.e. who had NOT been promoted).  Let’s juxtapose those data with the division failure rates (i.e., 1.0 minus the pass rate) on the SOL tests. 
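Here, for what it is worth, is a sketch of the arithmetic that produces the numbers plotted below.  The file and column names are hypothetical placeholders for the Table 3 and SOL pass rate downloads; the logic (repeaters equal fall membership minus the non-repeaters, failure rate equals 1.0 minus the pass rate, then a least-squares fit) is the point.

```python
# Sketch of the arithmetic behind the next two graphs.  File and column names
# are hypothetical stand-ins for the Table 3 and SOL exports.
import pandas as pd
import numpy as np

t3 = pd.read_csv("table3_fall_membership.csv")    # hypothetical: division, fall_membership, not_repeating
sol = pd.read_csv("division_reading_2017.csv")    # hypothetical: division, pass_rate (in percent)

t3["repeaters"] = t3["fall_membership"] - t3["not_repeating"]   # students who DID repeat a grade
t3["repeat_rate"] = t3["repeaters"] / t3["fall_membership"]     # share of enrollment held back

sol["failure_rate"] = 1.0 - sol["pass_rate"] / 100.0            # 1.0 minus the pass rate

merged = t3.merge(sol, on="division")
slope, intercept = np.polyfit(merged["failure_rate"], merged["repeat_rate"], 1)
r = np.corrcoef(merged["failure_rate"], merged["repeat_rate"])[0, 1]
print(f"slope = {slope:.3f}, R-squared = {r**2:.1%}")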

First, the reading tests:

image

Overall, 22.6% of Virginia students did not pass the 2017 reading SOL tests while a mere 1.7% of the 2018 fall enrollment were students who had not been promoted.

The positive slope is consistent, for the most part, with more students held back in the divisions with lower pass rates but the least squares fit to the data shows only a modest correlation (R-squared = 14%).

Richmond is the gold square.  The peer jurisdictions, from the top, are Norfolk, Hampton, and Newport News.  Charles City is green; Lynchburg, blue.

The math data paint much the same picture.

image

Average failure rate, 22.4%.  Very slightly better correlation.

We might wonder whether the lower social promotion rates in Norfolk (more than three times as many students held back as the average) and Hampton (almost 2.5 times as many) explain their much better performances vis-a-vis Richmond.  To make that case, however, we would have to explain away Newport News.

There is plenty of room to argue about the wisdom of social promotion.  There is no room to argue with the conclusion that Virginia schools employ it, wholesale.  Indeed, given that the SOL tests “establish minimum expectations,” there is room to conclude that “wholesale” understates the reality.


Myopic Attaboy

VDOE emitted a press release this morning announcing that Alexandria and Chesterfield have been placed on the 9th Annual AP District Honor Roll.  The release quotes the Superintendent:

“Earning a place on the AP District Honor Roll reflects a school division’s commitment to equity and to encouraging all students to reach higher,” Superintendent of Public Instruction James Lane said. “I congratulate the leaders and educators of Alexandria and Chesterfield County for reaching out and identifying students with the potential for success in Advanced Placement courses, and for providing the opportunities and supports new AP students need to succeed.”

All that hoorah overlooks, especially in Alexandria, the wholesale failure of too many children who are in greatest need of some good teaching.

Here, to start, are the 2018 SOL reading pass rates by school for the Alexandria division, juxtaposed with the statewide data, plotted vs. the percentage of economically disadvantaged (“ED”) students taking the tests.

image

The blue diamonds are the average pass rates by Virginia school of students who are not economically disadvantaged (“Not ED”).  The gold triangles are the pass rates of the ED students in those schools. 

Note: Data are for schools where VDOE does not suppress pass rate data for either the ED or Not ED group.

The green circles represent the school average pass rates of Alexandria’s students who are Not ED.  The green line, fitted to those data, shows performance mostly exceeding the state average, the blue dashed line.

To the point here, the yellow circles are the school average pass rates for Alexandria’s ED students.  Note particularly the points at 36%, representing Mount Vernon Elementary where 64% of the ED students failed to pass the reading tests, and 38% at George Mason Elementary, 62% failing.  Note also the yellow fitted line that discloses the wholesale underperformance of ED students in Alexandria schools relative to the state, the gold dashed line.

The math data show a similar pattern:

image

Again, Alexandria schools produce an ED performance that barely breaks 50%, along with two schools below 40% (Mount Vernon again, at 31% this time, and William Ramsay Elementary, 36%).

Our Board of “Education” continues to prefer celebrating success, even if it has to invent that success, rather than dealing with the unacceptable performance of too many of our schools.

That said, Chesterfield paints a prettier picture, nearly mirroring the state averages, albeit with an uncomfortable collection of pass rates <60%:

image

image

And, of course, the press release is quite silent as to what the Board of “Education” is doing to improve the ~20% statewide gap between the Not ED and the ED pass rates.

Economic Disadvantage and Schools’ Reading Performance

In terms of the state and division average SOL pass rates, economically disadvantaged (“ED”) students underperform their more affluent peers (“Not ED”).

For example, here are the 2018 state averages on the reading tests.

This makes the SOL average an unfair tool for measuring academic progress because of the invisible disadvantage the averages place on the schools with larger populations of ED students.

To start a dive underneath the overall averages, here are the 2018 school average reading pass rates for ED and Not ED students, plotted vs. the percentage of ED students in the school.

The blue diamonds are the school averages for the students who are Not ED.  The fitted line comports with the visual picture:  Pass rates of the Not ED students decline as the percentage of ED students increases.

The 24% R-squared value tells us that %ED predicts 24% of the variance in the pass rates.  Of course, that does not say that the population of ED students causes the score change, just that the two variables are related, albeit at some distance.
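For reference, that reading of R-squared follows directly from its definition for a least-squares fit (standard statistics, nothing specific to these data), where y_i are the observed pass rates, ŷ_i the fitted values, and ȳ their mean:

```latex
R^2 = 1 - \frac{\sum_i (y_i - \hat{y}_i)^2}{\sum_i (y_i - \bar{y})^2}
```

For a simple straight-line fit this is the square of the correlation coefficient, so an R-squared of 24% says the line accounts for about a quarter of the school-to-school variation in the Not ED pass rates and leaves the other three quarters to everything else.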

The orange triangles are the average pass rates of the ED students in those schools.  As we would expect, those numbers mostly are lower than the Not ED rates.  The fitted line shows a more modest negative slope and the R-squared value, 9%, tells us that the correlation between the ED pass rate and the ED percentage in the school is much less robust than the Not ED correlation.

These data comport with the earlier conclusion (again without telling us the underlying factors or mechanism): At the school level, averages for ED students are generally below the averages for Not ED students.

The data also suggest something else: Pass rates of the Not ED students correlate negatively, to a modest degree, with the percentage of ED students in the school.  But the pass rates of the ED students, not so much.

This gets even more interesting if we overlay the data for the schools in a particular division.  Let’s start with the Big Guy, Fairfax.

The green dots are the Fairfax Not ED school averages; yellow, the ED.

The green fitted line for the Not ED students lies nearly atop the all schools line with nearly the same R-squared value.

However, in terms of the school averages the Fairfax ED students underperform the state average; as well, the slope is more negative (-2.7% for a 10% increase in ED population vs. -1.8% for the Not ED students).  Moreover, the R-squared values for the two groups are nearly equal and are large enough to suggest a modest correlation.
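The “per 10% increase” figures are just the fitted slopes rescaled.  A minimal sketch, assuming a hypothetical file with one row per school and columns for the group, the ED percentage, and the pass rate:

```python
# Sketch: slope of pass rate vs. %ED for each group, rescaled to the
# "change per 10-point increase in ED population" used in the text.
# The file and its columns are hypothetical stand-ins.
import pandas as pd
import numpy as np

schools = pd.read_csv("fairfax_reading_2018.csv")   # hypothetical: school, group, pct_ed, pass_rate

for group, sub in schools.groupby("group"):         # assumed labels: "ED" and "Not ED"
    slope, intercept = np.polyfit(sub["pct_ed"], sub["pass_rate"], 1)
    fitted = slope * sub["pct_ed"] + intercept
    r2 = 1 - np.sum((sub["pass_rate"] - fitted) ** 2) / np.sum(
        (sub["pass_rate"] - sub["pass_rate"].mean()) ** 2)
    print(f"{group}: {slope * 10:+.1f} points per 10-point increase in %ED, R-squared = {r2:.0%}")
```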

Why the FFax ED students underperform and why that underperformance increases with the ED population are questions the FFax school board really should address.  For sure, they have a problem there.

Well, that was fun, but distant.  How about Richmond?

Richmond has three schools with low ED populations; the Not ED students in those schools have OK pass rates but the ED students are a mixed bag.  For the most part, both the ED and Not ED groups perform poorly in the more numerous, high-ED schools, which pulls the fitted lines down.

Indeed, a 10% increase in the ED population is associated with a -5.5% change in the Not ED pass rate and -2.9% in the ED rate.  As well, the R-squared for the Not ED students is reaching toward a robust correlation.  Said in English: On average, Richmond schools with more ED students have lower pass rates, while the pass rates for the Not ED students tend to be lowered more than those for the ED students.

The lowest %ED school, Munford (16%), has a 92% Not ED pass rate and a better than average 77% ED rate.  Richmond Alternative, at 21% ED, has a respectable 87% Not ED rate (especially “respectable” given that it is Richmond’s academic boot camp) but an awful 36% rate for its ED students.  Fox, at 22% ED, has a fine Not ED pass rate, 95%, but a subpar 63% ED rate.

The yellow point at 48% ED, 100% pass rate, is Community High, a select school showing select school results. That yellow point sits atop a 100% green point.

The other Richmond schools whose Not ED students averaged >90% are Franklin (95%) and Cary and TJ (92% each).

The point at 89% ED shows a 29.6% ED pass rate, third worst in the state for ED students; it is the worst of our awful middle schools (and second worst overall in the state), MLK.

The four yellow points at 100% ED illustrate a minor anomaly in these data: The VDOE suppression rules blocked the head counts for the Not ED students at Greene, Fairfield Court, Southampton, and Woodville, so (1) there are no corresponding Not ED points, thus (2) those four ED points are a few % farther to the right than they should be.  Count that as a bonus.  If those points were in the right places, the fitted line would be even steeper.

These data say, quite clearly, that Richmond has a problem, especially in its (many) schools with large ED populations.  (The estimable Jim Bacon would suggest that problem, at least in part, is student behavior.)

Richmond will continue to look bad at least until it figures out what is wrong here.  On the SOL averages, they look even worse than their (awful) performance would merit because of the large ED populations.  And, to the point of real life and not sterile numbers, Richmond’s schools are failing, miserably, in their purpose of delivering an education “to enable each student to develop the skills that are necessary for success in school, preparation for life, and reaching their full potential.”  That failure is most stark in the case of the students who are already disadvantaged in economic terms.

For the record, here is the Richmond list.  The #DIV/0! and #N/A entries reflect suppressed data.

There are more insights to be had from these data.  Let’s start with the peer cities.

In Hampton, notice the relatively narrow range of ED percentages, the lower than average pass rates, and the steep fitted lines with non-trivial R-squared values.

Newport News data tell the same story but with much steeper slopes and stronger correlations.

Also Norfolk.

Whew!  That looks like a magnified version of Richmond’s ED issues.

Turning to the ‘burbs, these data rat out Hanover, which performs at the state average for its Not ED students but not so well with ED students, even at the lower ED populations.  Hanover gets good numbers on the statewide list of average pass rates, however, because of its low ED percentages.

Then we have Chesterfield, performing at average for both groups.

And Henrico, with notable correlations and major underperformance by both groups in the higher %ED schools.

Finally, Lynchburg, named for a relative of my paternal grandmother and, to the point here, a place where I have a reader.

Notice the milder correlations here.  Also the outstanding Not ED (95%) and not so outstanding ED pass rate (59%) at the high-ED school (Dearington Elementary).  Also the lowest ED pass rate, 47%, contrasting with an 83% Not ED rate (at Linkhorn Middle).

Bottom line: OK Not ED pass rates in L’Burg; not so good ED.

Next up: Math.

Economic Disadvantage v. More Teachers

We have seen that, on the 2017 data, division average SOL pass rates are not correlated with the numbers of teachers per student.  There is a flaw in that analysis, however:  Economically disadvantaged (“ED”) students score lower on the SOL, on average, than Not ED students.  Thus, divisions with larger populations of ED students tend to have lowered overall pass rates.

The VDOE database can break out the pass rates for both ED students and their more affluent peers, so let’s take a more nuanced look.

To start, here are the 2017 division average reading pass rates for both ED and Not ED students graphed vs. the number of teachers per thousand students.  (The latest available teacher numbers are from 2017).

The slopes of the least squares fitted lines might suggest that more teachers in the division correlate with decreased pass rates of the Not ED students and slightly increased rates of the ED students.  But the R-squared values tell us that the pass rates in both datasets are essentially uncorrelated with the teacher/student ratios.

In short, these data reach the same result as the overall pass rate data: Divisions with more teachers per student do not, on average, have better pass rates.

The largest exception to that generality, out there with 189 teachers per thousand and a 97.1% Not ED pass rate (and a much better than average ED rate), is Highland County.  The ED superstar is West Point with 135 teachers per thousand and an ED pass rate of 85.2, followed by Wise (115, 84.4) and Bath (159, 82.7).

To single out some other divisions: Richmond is the yellow squares.  The peer cities are the red triangles, from the left Newport News, Hampton, and Norfolk.  Charles City is green; Lynchburg, blue.

Just looking at the graph, Richmond’s ED rate is farther from the fitted line than its Not ED rate.  Indeed, Excel tells us that the Richmond Not ED average is 11.8 points below the all-divisions average.  That is, Richmond’s Not ED students passed at a rate 11.8 points lower than the state average for Not ED students.  The Richmond ED rate is 17.1 points below the state average for ED students.  That is, Richmond’s Not ED performance is poor; its ED performance is half again worse.
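Those “points below the all-divisions average” numbers are simple differences.  A sketch, again with hypothetical file and column names, and treating the state average as the unweighted mean across divisions (which may differ slightly from VDOE’s published figure):

```python
# Sketch: how far a division sits below the all-divisions average, per group.
# File, columns, and the "Richmond City" label are hypothetical assumptions.
import pandas as pd

df = pd.read_csv("division_reading_2017.csv")   # hypothetical: division, group, pass_rate

for group in ["Not ED", "ED"]:
    sub = df[df["group"] == group]
    richmond = sub.loc[sub["division"] == "Richmond City", "pass_rate"].iloc[0]
    gap = richmond - sub["pass_rate"].mean()
    print(f"{group}: {gap:+.1f} points vs. the all-divisions average")
```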

Aside from the question why their ED scores are so low, these data falsify Richmond’s frequent excuse that it must deal with a large ED population: Richmond does a lousy job with its Not ED students, and an even worse one with the ED students.  Whatever the cause of Richmond’s awful SOL performance, it infects the entire student population.

Next up, writing:


Pretty much the same story there as the reading data (but notice how Highland reverts toward the mean).  Richmond’s ED rate is lowest in the state.

The graphs for the other three subjects are chapters in the same book:


There is one interesting trend here: Richmond’s ED underperformance, relative to the Not ED students, is much smaller in science and in history/SS than in English, and is somewhat less in math.  To quantify that, here are the Richmond differences from the division means for each group in each subject:

These data do not separate out any of the factors that can affect student performance, other than (roughly grouped) economic status; they do comport with the notion that Richmond has a lot of work to do, especially with its ED students.

To the bottom line: These data are consistent with the conclusion in the recent study that “the evidence suggests at best a small effect [of class size] on reading achievement. There is a negative, but statistically insignificant, effect on mathematics.”

So, when our School Board again says it needs more money, ask them what for.  And if it’s for more teachers (indeed, whatever it’s for), ask them to prove that the money will improve learning.

Larger Classes, Same Performance

The estimable Jim Bacon has posted on the class size/student performance issue.  He used a graph I produced showing division average math pass rates vs. the number of teachers per thousand students.

Oops!  I used 2016 pass rates and 2017 teacher/student ratios.  Using the 2017 pass rates changes the graph slightly but does not modify the conclusions.

Here are the correct data for math and the other four subjects as well.  Pass rates are from VDOE’s very nice (but very slow) database; teacher data are from the Superintendent’s Annual Report.

First, reading:

The negative slope might suggest that increasing the number of teachers is associated with lowered SOL pass rates but the R-squared value tells us that the two variables are essentially uncorrelated.

Richmond is the gold square on the graph.  The red diamonds are the peer cities, from the left Newport News, Hampton, and Norfolk.  Charles City is green; Lynchburg, blue.

The Big Hitter up there is Falls Church, with a 92.6% pass rate at 119.5 teachers/thousand.  Next in line are West Point (135.0, 91.0) and  Poquoson (105.1, 90.7).

The data for the other four subjects tell much the same story.

More on “Accreditation” in Richmond

Under the new accreditation regulation, no Richmond school was denied accreditation this year.  No matter how badly a school has failed its students, the principal now can say his/her school is “accredited” (albeit 24 were accredited “with conditions”).

Three of the most egregious examples of that are here.  I’ve redone the graphs to present the entire list of schools that have acquired accreditation by fiat.  (Data: last year, this year.)

Note: Data here are for year tested; accreditation year is one year later.  Thus, the 2018 data in the graph are from the 2017-18 testing and establish (or would if they meant anything) accreditation status for 2018-19.

image

image

image

image

image

image

image

Note: The elementary schools don’t test for writing.

image

image

image

image

image

image

image

image

image

image

image

Then we have the schools that were “Accredited with Warning” last year but were saved from even that by the new system.

image

image

image

image

image

image

So much for “this new approach to accountability.”

Let’s Squander More Money!

The Times-Dispatch reports that “Richmond’s top officials spent their quarterly meeting again calling for more money for the city’s school system.”  This abiding demand for more money ignores the more important question:  What was RPS doing with the $335,290,809 it already had?

VDOE has some data on that.

The latest expenditure data are from 2017. (We are three months beyond the end of the 2018 session but VDOE won’t have the 2018 data until about the time we start seeing dandelions.)

These data show division expenditures for operations divided by the end-of-year average daily membership.  A  footnote to the spreadsheet tells us that “[o]perations include regular day school, school food services, summer school, adult education, pre-kindergarten, and other education, but do not include non-regular day school programs, non-local education agency … programs, debt service, or capital outlay additions.”
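A sketch of that per-pupil figure and of the ranking in the chart below.  The file name and columns are hypothetical, “Richmond City” is an assumed division label, and the weighted quotient used for the state average is one reasonable reading (VDOE’s published average may be computed differently).

```python
# Sketch: per-pupil operational spending and a division's rank on that measure.
# File, columns, and the "Richmond City" label are hypothetical assumptions.
import pandas as pd

spend = pd.read_csv("division_operations_2017.csv")   # hypothetical: division, operations_expenditure, eoy_adm

spend["per_pupil"] = spend["operations_expenditure"] / spend["eoy_adm"]
spend = spend.sort_values("per_pupil", ascending=False).reset_index(drop=True)

rank = spend.index[spend["division"] == "Richmond City"][0] + 1        # 1 = most expensive
state_avg = spend["operations_expenditure"].sum() / spend["eoy_adm"].sum()
richmond = spend.loc[spend["division"] == "Richmond City", "per_pupil"].iloc[0]
print(f"Richmond rank: {rank}; ${richmond - state_avg:,.0f} per student above the state average")
```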

By that measure, Richmond is the 17th most expensive division per student:

image

Richmond is the yellow bar.  The peer jurisdictions – Norfolk, Newport News, and Hampton – are the red bars.

We are spending $1,396 per kid more than the state average and $1,881 more than Norfolk.  In terms of SOL scores, we get very little return for all that money.

image

Richmond is the gold square; the red diamonds are the peer jurisdictions, from the top Hampton, Norfolk, and Newport News.  As a courtesy to my reader(s) there, the green circle is Lynchburg and the green diamond is Charles City.

The R-squared value for the least squares fitted line tells us that the reading pass rate and the per pupil expenditure are not correlated.

The math data tell the same story.

image

“But wait!” you say.  We know that the SOL is not a fair measure because “economically disadvantaged” students do not score as well as their more affluent peers.

Indeed.  The Board of Education had a better measure, the Student Growth Percentile, that measured learning and did not correlate with poverty.  They abandoned it, however, because it measured too well: It told us how well each teacher performed.

Denied the best data, all we can do is use the SOL numbers and a little algebra to offset the average effect of poverty. 

On the reading tests in ‘17, the division percentage of “disadvantaged” students predicted about 35% of the variance of the division average SOL scores.

image

(You’ll notice that Richmond grossly underperformed even the fitted line.)

Let’s be generous, and calculate an adjusted pass rate as if the correlation were 100% (“Generous” for sure: Notice that some of the pass rates get pushed over 100%.)

image
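My reading of that adjustment: fit the division pass rates against the percentage of disadvantaged students, then add back the (negative) slope times each division’s percentage, so every division is scored as if poverty had its average effect removed.  A minimal sketch under that reading, with hypothetical file and column names:

```python
# Sketch of the poverty adjustment described above, under my reading of it:
# fit pass rate vs. %ED across divisions, then add back slope * %ED so every
# division is scored with the average effect of poverty removed.
# File and column names are hypothetical.
import pandas as pd
import numpy as np

df = pd.read_csv("division_reading_2017.csv")   # hypothetical: division, pct_ed, pass_rate

slope, intercept = np.polyfit(df["pct_ed"], df["pass_rate"], 1)   # slope comes out negative
df["adjusted"] = df["pass_rate"] - slope * df["pct_ed"]           # subtracting a negative slope boosts the rate
# Nothing caps the result, which is why some adjusted rates get pushed over 100%.

print(df.sort_values("adjusted")[["division", "pass_rate", "adjusted"]].head())
```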

The adjustment for Richmond’s 64.2% poverty rate boosts its rate nicely but only lifts its ranking to fourth from worst, up from second.

And, boosted or not, our scores are low and pricey:

image

The correction for math is slightly different (28.025% of the poverty rate v. 28.671% for reading) and the outcome is slightly worse (Richmond is third worst).

image

In short, poverty does not come close to explaining the awful performance of the Richmond Public Schools.

It would be good if our Leaders were to stop whining about wanting more money and start explaining why they get such lousy results with the very large amount of money they already are spending.