How the University of Arkansas Measures Charter School Effectiveness and Return on Investment

(You’ve got to read it to believe it)

One problem with easy access to lots of data and a computer is that—in the wrong hands—it can result in some pretty ridiculous “research.”  Especially when the research is more for the purpose of promoting an agenda than advancing knowledge.  Case in point: A Good Investment: The Updated Productivity of Public Charter Schools in Eight U.S. Cities, recently released by the Walton-funded School Choice Demonstration Project at the University of Arkansas.  The authors are Corey DeAngelis, an education policy analyst at the Cato Institute Center for Education Freedom; Patrick Wolf, a professor of education policy and 21st Century Endowed Chair in School Choice at the University of Arkansas (endowed by Walton); Larry Maloney, president of Aspire Consulting (information technology); and Jay May, a senior consultant for EduAnalytics, LLC (they provide data-based solutions to what ails K-12 education).  The eight cities reviewed are Atlanta, Boston, Denver, Houston, Indianapolis, New York City, San Antonio, and Washington, D.C.

This paper has been reported in newspapers around the country like the New York Post and the Washington Examiner.  The tone of the reporting is exemplified by the headline in the Anchorage Daily Planet: “Case Closed: Charter Schools Deliver More Education.”  As if, in social science research, the case is ever closed.  Never mind that the report examines only two subject areas (reading and math) in one grade (8th) and uses highly suspect methodology at that.

The report compares charter schools in each city’s metropolitan area (not just the city districts) with the traditional public schools (TPS) in the same area on the basis of two factors: cost-effectiveness and return on investment (ROI).  The authors conclude that, “On average, for the students in our cities, public charter schools are 40 percent more cost-effective and produce a 53 percent larger ROI than TPS.”  This is a pretty startling finding, but does it hold up under even casual scrutiny?  The answer is “no” according to Peter Green, the blogger at Curmudgucation.org.  He has posted an excellent critique of this report, but there are some problems he doesn’t get into that further discredit it.

To begin with, the estimates of cost-effectiveness and ROI both depend on the definition of cost-effectiveness the authors use, which is simply the average standardized test scores divided by average revenue per student:
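Spelled out (my reconstruction, since the report presents this as a formula, and the report describes the numerator and denominator the same way), it amounts to:

cost-effectiveness = average 8th grade NAEP score ÷ average total revenue per student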

The numerator and denominator in this formula are both beset with problems that relegate the resulting estimates of cost-effectiveness and ROI to the category of junk science.  First, the numerator consists of only 8th grade reading and math NAEP scores.  Peter Green rightly questions whether standardized test scores in only two academic areas in only one grade really tell us everything we need to know about the quality of an entire school or district.  But let’s play devil’s advocate and say they do.  Even with this stipulation, the scores still have to provide a valid point of comparison between the two segments.  In other words, either the demographics of the test-taking populations in the two segments must be identical, or the test scores must be adjusted to account for the differences.  The authors of this report do not adjust the scores for demographic differences, and they apparently believe that the two populations are identical for comparison purposes.  I say “apparently” because they don’t raise the issue and make no effort to convince the reader that the populations are comparable.  There are solid reasons for assuming they aren’t.

First, to be demographically identical, the student subgroups within each population must be proportionately the same.  We know, however, that charter schools typically enroll fewer students with disabilities (SD) and English learners (EL), which are historically low-scoring subgroups.  Enrolling proportionately fewer of these students would result in higher scores for the charter schools as compared to TPS.  The authors do acknowledge that charter schools enroll fewer EL and SD students, but say “those enrollment gaps failed to explain the revenue differences between the public school sectors in every city except Boston (emphasis added).”  Huh?  What about test score differences?  No mention of that.
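A quick hypothetical shows how much enrollment composition alone can matter (the numbers here are illustrative, not taken from the report).  Suppose SD and EL students average 250 on the 8th grade NAEP and all other students average 285.  A district enrolling 15 percent SD/EL students averages (0.15 × 250) + (0.85 × 285) = 279.75, while a charter sector enrolling 5 percent SD/EL students—with identical performance in every subgroup—averages (0.05 × 250) + (0.95 × 285) = 283.25.  That 3.5-point gap reflects nothing but who enrolled, yet in the authors’ formula it reads as greater “effectiveness.”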

But even if they did enroll the same proportion of SD and EL students, that is still no guarantee that these students in the charter schools actually took the NAEP.  Here’s what NAEP itself has to say on the subject:

Some SD and ELL students may be able to participate in NAEP, with or without accommodations. Others are excluded from NAEP assessments because they cannot participate with allowable accommodations. The percentage of SD and ELL students who are excluded from the NAEP assessments varies both from one jurisdiction to another and within a jurisdiction over time (emphasis added).  

Charter schools are jurisdictions that are independent of the districts in which they are located.  They can make their own decisions about testing accommodations (if any) and about which SD and EL students they exclude from the test.  It’s a stretch to believe that the decisions they make are identical to the decisions their host districts make.  And because charter schools like to point to standardized test scores as evidence of their superiority, they have an incentive to exclude low-scoring students from the NAEP.  For these reasons, we cannot accept as an article of faith—as the authors would seem to have us do—that the demographics of either the school enrollment or the test-taking populations are identical, or even close enough to make a valid comparison, in each of the eight cities.

The denominator used in this study—total revenue per student—has equally serious problems.  First of all, it uses total revenue, not just revenue applied to 8th grade reading and math instruction.  So it compares the total revenue of a charter school containing an 8th grade (usually a middle school, but sometimes a K-8 elementary school) with that of city districts, which are K-12.  Since grade 9-12 secondary schools receive a higher level of per-student funding than the lower grades, this alone inflates the TPS denominator relative to the charter schools.
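Again, a hypothetical makes the point (the figures are illustrative only).  Suppose a district spends $12,000 per student in grades K-8 and $15,000 per student in grades 9-12, with enrollment split roughly 70/30.  Its blended per-student revenue is (0.7 × $12,000) + (0.3 × $15,000) = $12,900.  A charter middle school spending the same $12,000 per student on the same grades shows a smaller denominator, and therefore a higher “cost-effectiveness” score, even though nothing about 8th grade spending differs between the two sectors.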

But there’s more.  The revenue for the TPS includes funding for preschool and adult education!  The authors do this with a straight face and make no effort to explain how preschool and adult education revenue can possibly be related to 8th grade test scores.  Instead, they literally state that, with this methodology, “We are able to connect funding to student outcomes,” completely ignoring the disconnect between the funding they include and the outcomes they purport to measure.

Even within a 6-8 or 7-8 middle school, different schools will spend different amounts on 8th grade math and reading, depending on their needs and priorities.  The only valid denominator for the authors’ purposes is revenue actually spent on 8th grade math and reading instruction.  Instead of attempting to estimate that, the authors go to the opposite extreme of including funding for everything, including the kitchen sink.

I don’t know what’s worse: what this report says about what passes for educational research these days, or what it says about the gullibility of too many newspapers and their education journalists.