Since the Louisiana Department of Education finally released its School Performance Scores (SPS) yesterday, I’m inspired to pass along some thoughts on how screwed-up the methodology is. At various points below I assign letter grades to component indices, using the same scale as the composite SPS (below 50 F, 50-69.9 D, 70-84.9 C, 85-99.9 B, 100-150 A).
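For reference, that scale is just a lookup. A quick Python sketch of the cut-points as stated above:

```python
def sps_letter_grade(score: float) -> str:
    """Map a composite SPS (0-150) to its letter grade,
    using the cut-points stated in the text above."""
    if score < 50:
        return "F"
    if score < 70:
        return "D"
    if score < 85:
        return "C"
    if score < 100:
        return "B"
    return "A"

print(sps_letter_grade(69.9))   # D
print(sps_letter_grade(100.0))  # A
```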
1) Timing: The scores released October 24, 2013, depend on tests taken primarily before the end of the school year in May, perhaps with some summer retests thrown in. The graduation data are a year older still. How incompetent must the Department of Education be if it cannot release data before school starts again in the fall? I’m stunned that legislators and citizens tolerate this much lag between actions and so-called accountability.
2) Missing schools: 1315 schools in Louisiana had SPS scores and grades assigned, and all but five of those used the same site code and school name as reported on the February 2013 enrollment count. The five exceptions are in Lincoln Parish, where the Howard School (which appears to be at the Methodist Children’s Home) and four university-affiliated lab schools (three at Grambling and one at Louisiana Tech, if I’m not mistaken) were removed from Lincoln Parish’s district grouping and placed in their own groups. That’s odd, but even more disturbing is that 79 schools were not given a school performance score. Many of those are low-enrollment or alternative schools; others have closed. However, they all had students. One, East St. John High School (site code 048001), had 1306 students, yet there’s no school grade reported. See https://sites.google.com/site/drjamescfinney/home/files/ for a spreadsheet showing the missing schools.
3) FERPA nonsense: In their omnipresent effort to keep useful data away from the public, the folks at the Department of Education routinely use the federal government as an excuse to obfuscate: “The Louisiana Department of Education has modified and/or suppressed data reported to protect the privacy of students in compliance with the Family Educational Rights and Privacy Act (FERPA) codified at 20 U.S.C. 1232g. The strategies used to protect privacy vary and may include rounding or other techniques but do not substantially affect the general usefulness of the data. . .” Baloney! It’s hard to do any kind of mathematical operation on the number “>145” because it’s not a number. Is anyone’s privacy really being protected by not letting us know if the credit accumulation/dropout index is exactly 150 or some slightly smaller number? Hardly. But “muddying the narrative” with FERPA as an excuse makes it very difficult to check on the quality of the calculations used by the state. So much for accountability.
4) Combination schools: Bulletin 111, which supposedly defines how this SPS “accountability” system is supposed to work, appears to put twice the weight on high school students versus elementary and middle school students when calculating a score for a school with students in both grade ranges. Unfortunately the public is not privy to the exact number of units on each side to see if, for example, the K-8 component consists of only one student, or most of the students, or some other ratio. There might be a legitimate privacy concern if there are only a few students in either group, but we could at least be given the two component SPS scores (K-8 and 9-12) from which the composite scores were calculated.
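To make the doubled weighting concrete, here is a sketch of the composite under that reading of Bulletin 111. The component scores and unit counts below are hypothetical, since the state doesn’t publish them; the point is how much the hidden mix of students can swing the result:

```python
def combination_sps(sps_k8, n_k8, sps_hs, n_hs):
    """Composite SPS with each high school 'unit' counted twice,
    per the reading of Bulletin 111 described above."""
    return (n_k8 * sps_k8 + 2 * n_hs * sps_hs) / (n_k8 + 2 * n_hs)

# Hypothetical counts: identical component scores, very different composites
# depending on how many students sit on each side.
print(combination_sps(90.0, 500, 60.0, 50))            # mostly K-8: 85.0
print(round(combination_sps(90.0, 10, 60.0, 300), 1))  # mostly HS: 60.5
```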
5) Overall grade distribution: Of the 1315 schools given a grade, 106 got Fs, 264 got Ds, 374 got Cs, 384 got Bs and 187 got As. What does this mean? Absolutely nothing.
6) K-8 assessment index distributions: Of 1177 K-8 assessment indices, there were 160 Fs, 299 Ds, 315 Cs, 265 Bs, and 100 As, with 38 not reported due to low enrollments. So there would have been a lot more potential takeover targets before bonus points were thrown in. Where available, the dropout/credit accumulation index mentioned below also would generally have moved the scores up. The hideous characteristic of the K-8 assessment index (which also applies to the high school EOC and ACT indices) is that a student either earns the school an A (100 points for Basic, 125 for Mastery and 150 for Advanced) or an F (0 points for either Approaching Basic or Unsatisfactory). And the difference between Approaching Basic and Basic is one multiple-choice test question.
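The all-or-nothing scoring can be sketched like this. It’s a Python illustration of the point values just described, with the index taken as a simple per-student average; the bulletin’s exact formula may differ in detail:

```python
# Points each student contributes to the assessment index, per the
# achievement levels described above (all-or-nothing below Basic).
POINTS = {
    "Unsatisfactory": 0,
    "Approaching Basic": 0,
    "Basic": 100,
    "Mastery": 125,
    "Advanced": 150,
}

def assessment_index(levels):
    """Average points across students' achievement levels."""
    return sum(POINTS[lvl] for lvl in levels) / len(levels)

# One multiple-choice question can separate "Approaching Basic" (0 points)
# from "Basic" (100 points) -- an F versus an A for that student.
print(assessment_index(["Approaching Basic", "Basic", "Mastery"]))  # 75.0
```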
7) Dropout credit accumulation index: This component is ridiculous for at least two reasons. First, it appears to judge a school containing eighth grade based on what the 2011-12 eighth grade class did at their high school in 2012-13. Note that the high school doesn’t get judged directly on credits accumulated by those students. Furthermore, almost everyone got an A. Of 448 schools given a dropout-credit-accumulation index, only 7 got index values below 50, the F range. Another 4 were Ds, 10 were Cs and 9 were Bs. The remaining 418 schools with eighth grade got index values above 100, the A range. A student contributes 100 points to the index if, by the end of ninth grade, they have at least five high school credits. Is this raising the bar? The pattern seems to raise one bar too far, and then drop another to compensate.
8) The End-of-Course Assessment Index is reported, at least in theory, for 352 high schools, although 16 apparently had fewer than 10 students and hence no data reported. Of the remaining 336, the index values were in the F range for 87, D 103, C 85, B 39 and A 22. Yep. John White’s magic formula says 87 schools flunked the End-of-Course Assessment index and only 22 got an A. These data have the same problem as the elementary assessment index data — a student either contributes 100 points or more (worth an A) or zero points (lowest possible F score). Again the difference is one multiple-choice problem. There is no difference in SPS terms between getting every answer wrong and getting the highest-possible “non-proficient” score. But one more correct answer turns an F into an A. Then enough additional correct answers might make the score a higher A. And the state won’t even share the histogram of how many students got each of the possible scores.
9) The ACT Index is even more perverse in its distribution than the End-of-Course Assessment Index. Of 321 high schools, 12 had insufficient numbers to report data. Of the remaining 309, the distribution was 104 F, 105 D, 61 C, 24 B, and 15 A. It should be noted that two schools with identical average ACT scores can have wildly different ACT index values. For example, a school with an average ACT of 17 could get an ACT index as low as 0 (if everyone got a score of 17) or as high as 94.1 (if 16 students got 18s and the seventeenth got a 1, the lowest possible composite). In reality, the variability would not be as extreme. However, shifting from a score of 17 being worthless in SPS terms to an 18 being valuable just seems bizarre.
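Here is the arithmetic behind that example, under the simplifying assumption (consistent with the description above, though the exact point table is the state’s) that an ACT composite of 18 earns a student roughly 100 index points while anything lower earns none:

```python
# Assumption for illustration: 18 or above earns 100 index points,
# 17 or below earns 0. Bulletin 111's actual point table may differ.
def act_index(scores, points_at_18=100.0):
    return sum(points_at_18 for s in scores if s >= 18) / len(scores)

uniform = [17] * 17        # every student scored 17
mixed = [18] * 16 + [1]    # sixteen 18s and one 1

# Both rosters average exactly 17 on the ACT...
assert sum(uniform) / 17 == sum(mixed) / 17 == 17

# ...but their index values are nothing alike.
print(act_index(uniform))           # 0.0
print(round(act_index(mixed), 1))   # 94.1
```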
10) Graduation Index: Of 306 high schools with reported graduation index values, 10 were in the F range, 20 D, 85 C, 125 B, and 66 A. So maybe this is supposed to compensate for the opposite skew on the End-of-Course and ACT data? It’s also worth noting that there is a one-year lag in the graduation index data beyond the compression of four years’ worth of history into one reporting interval. Thus a student who dropped out October 24, 2008 (five years to the day before the scores were released) would finally count (as a zero) on the 2013 School Performance Score. And a high school whose entire senior class dropped out last year could still have an excellent graduation index because of the one-year delay. Go figure.
11) Graduation Rate: The same problems mentioned with Graduation Index recur with the Graduation Rate. The data are all one year old, and a dropout from five years ago affects the current score. Also, the conversion from graduation rate as a percentage to the Graduation Rate Index is generous. There were only 10 Fs, 15 Ds, 32 Cs, 53 Bs and 196 As. All it takes to get an A is for 3/4 of the freshmen to last through graduation. So much for high standards.
12) Bonus Points (High School): John White has suggested that poor scores on ACT or End-of-Course indices would be compensated by the bonus points. In fact, that was his testimony before a legislative committee. The reality? Of 138 high schools with no K-8 component, exactly 7 were given bonus points. They were:
- Caddo Parish Magnet High School (already an A before the 4.3 bonus points)
- Live Oak High School (Livingston Parish, already a low A before the 3.7 bonus points)
- Edna Karr High School (Orleans Parish, a B school with or without the 3.7 bonus points)
- International High School of New Orleans (charter school, a D school with or without the 4.8 bonus points)
- O. Perry Walker Senior High School (Recovery School District charter, 9.7 bonus points changed a C to a B)
- Sarah Towles Reed Senior High School (Recovery School District, 10 bonus points changed an F to a D)
- G.W. Carver High School (Recovery School District, a D with or without the 10 bonus points)
On the other hand, of 963 K-8 schools with no high school component, more than half (498) got bonus points. Of those, 218 got the maximum 10 bonus points. Of the 214 combination schools (at least some K-8 students and some high school students), 123 got some bonus points, and 55 got all 10. Unfortunately there is no easy way to see whether those were mostly earned by high school students or by elementary students. And there’s also no provision for letting the public know on which tests the non-proficient students demonstrated their improvement enough to earn bonus points.
Lucky 13) But wait, there’s more! It’s easy to get lost in the details of how weird the mathematics are, but that misses the point. The state is making career-ending and neighborhood-disrupting decisions based on an ever-changing, arbitrary, opaque process. Never has it been made clear what a score of A, B, C, D or F should truly represent in terms of school quality. And even if that were clearly defined, it likely wouldn’t be measured by a set of once-per-year tests. And even if the annual tests truly measured either student achievement or instructional quality, Louisiana’s accountability system ignores what happens at the beginning of a student’s educational career (since the first test that counts is at the end of third grade) or at the end (since End-of-Course tests and the ACT are normally completed in the first three years of high school). It’s ridiculous that scores aren’t released before the end of summer, and it’s asinine to judge the quality of a school in 2012-13 by the accomplishments of the Class of 2012.
I’m not sure this pseudo-accountability system is worth fixing. But releasing more data and releasing it earlier would help while we wait for the Legislature to come to its senses and demand an accountability system that improves education. The current system serves the interests of those who would destroy public education, not those who would benefit from its improvement.