
How we calculated our index

We didn’t derive an independent index; we reviewed existing indices using a four-step process.  First, we defined an explicit goal; then we chose criteria for selecting an index; then we identified statewide indices, or data that could be indexed; and only then did we evaluate what we found and select an index.

First:  The Goal of the Index

Over the past five years, we have been talking to parents around the state, reading articles and board packets, watching webinars and interviews, and attending school board meetings.  In that time, a sense developed that there was a hidden quantitative difference between districts.  We call it hidden and quantitative because:

  • there were obvious quantitative differences between basic-aid and LCFF-funded districts, and
  • there were obvious qualitative differences between boards, both across and within types of districts.  

But there was some other factor that meant certain boards, often in clusters, seemed to have a much harder struggle than others.  This became evident when we drilled down into the NPR/Education Week report, Why America’s Schools Have a Money Problem, which highlighted inequity in California, but not the inequity that everyone seems to assume.

The report’s interactive graphic, combined with the WestEd paper on the Silent Recession in California schools and the California Budget & Policy Center’s paper Making Ends Meet, made the picture clear.  We realized that pain elsewhere was agony in certain areas: pockets that were too easily compared with their basic-aid neighbors, and hence not compared with their peers elsewhere in the state.

So, the goal in identifying a usable index was to find one that would:

  • accurately reflect regional economic diversity
  • allow quantification of any extraordinary financial restriction facing individual districts

Both goals sit in the context of statewide ‘equitable’ funding for schools.

Thus, the index could serve as a guidepost for:

  • alleviating excess pain (how much would it take to address the worst of the inequity?),
  • evaluating Administration and Board financial competence, and
  • evaluating conclusions from academic research.
    • If we call districts “underserved” by certain programs, but those districts are primarily located in the Bay Area or in Orange, San Diego, and Ventura counties, we are probably measuring a weakness in LCFF, not in that particular program.

Second:  The Criteria for a Productive Index

Any set of numbers that can be found disaggregated by county is a candidate for an index.  The following qualities strengthen the case for the results an index yields:

  • Identifies the same counties that other credible indices flag, in roughly the same rank order
  • Was created and is maintained by an organization that is unlikely to be manipulating it in favor of specific California counties
  • Has independently verifiable inputs, ideally easily accessible via the internet
  • Is refreshed annually, not a one-time or discontinued study
  • Reflects the proximity issue in the teaching profession, namely that the job is M-F, 8 am to 4 pm, cannot be performed offsite, cannot be refitted into a four-day, ten-hour workweek or a 96-on/96-off schedule, doesn’t tolerate tardiness due to traffic delays, etc.
    • Thus local housing is harder to work around in teaching than in many other professions, so
    • Local housing costs should be emphasized more, rather than less
  • Isn’t measuring something that shares a dependent variable with school funding
    • City and county officials’ compensation packages are inversely related to school compensation in urban areas, because AB-8 shifted property taxes from school districts to co-located counties, cities and special districts.  Thus we find very high city/county compensation packages where we find very low school-allocated property tax, which means those packages likely overstate any regional cost difference.
      • We don’t want to measure either school funding or inverse school funding in a county.
  • Is modulated enough that it is usable without decrementing for non-compensation costs, since even those tend to be driven by the local economy (textbooks and electronics excepted).
  • Pivots around Los Angeles County (a normalization sketch follows this list)
    • As the biggest fish in the pond, LA anchors the formula: LCFF is inherently built around the county’s 25%+ share of statewide ADA
  • Partially compensates for the weakness of choosing FRPM as a benchmark for disadvantage
    • FRPM is a national threshold, thus already low for California as a whole, and particularly restrictive in high cost-of-living areas (the arithmetic is sketched after this list)
      • Over 70% of a FRPM-qualified family’s income would be consumed by HUD’s 40th-percentile housing allowance in Marin, San Francisco, San Mateo and Santa Clara; 50% or more in San Diego, Santa Barbara, Orange, Contra Costa, Alameda and Santa Cruz; and 46% in Los Angeles, which we consider the pivot point.
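
To make the Los Angeles pivot concrete, here is a minimal sketch of the normalization step.  The county names are real, but the raw values are invented placeholders, not actual living-wage figures:

```python
# Minimal sketch: normalize a county-level index so Los Angeles = 1.0.
# The raw values below are hypothetical placeholders, not real figures.
raw_index = {
    "Los Angeles": 30.0,     # hypothetical $/hour figure
    "San Francisco": 40.0,
    "Fresno": 24.0,
}

# Divide every county by the Los Angeles value so LA becomes the pivot
# (1.0); counties above 1.0 face higher costs than the pivot county.
la_value = raw_index["Los Angeles"]
normalized = {county: value / la_value for county, value in raw_index.items()}

for county, ratio in sorted(normalized.items(), key=lambda kv: -kv[1]):
    print(f"{county}: {ratio:.2f}")
```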
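
The housing-share figures above can be reproduced with back-of-the-envelope arithmetic.  The sketch below assumes the standard FRPM income ceiling of 185% of the federal poverty guideline (roughly $24,600 for a family of four in 2017); the sample rent is a hypothetical value, so substitute the published HUD 40th-percentile rent for the county you want to test:

```python
# Back-of-the-envelope check of the FRPM-vs-housing-cost claim.
# Defaults are approximations: 185% of the federal poverty guideline is
# the FRPM income ceiling, and $24,600 was roughly the 2017 guideline
# for a family of four. The rent argument is a stand-in for HUD's
# published 40th-percentile fair market rent in a given county.

def housing_share(monthly_rent: float,
                  poverty_guideline: float = 24_600.0,
                  frpm_multiplier: float = 1.85) -> float:
    """Fraction of a family's FRPM income ceiling consumed by a year of rent."""
    frpm_income_ceiling = poverty_guideline * frpm_multiplier
    return (monthly_rent * 12) / frpm_income_ceiling

# Example with a hypothetical $2,800/month rent:
print(f"{housing_share(2_800):.0%}")  # ~74%, i.e. "over 70%" territory
```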

Third:  The Candidates

The candidates we weighed against these criteria were the MIT Living Wage data, the California Budget & Policy Center’s (CBPC) Making Ends Meet figures, HUD housing cost data, and California adjusted gross income (CA AGI) by county.

Fourth:  A Chosen Index

The index that came closest to meeting the criteria laid out above was based on the MIT Living Wage data:

  • It correlates highly with all the other potential indices.
    • In particular with the CBPC, whose components have the benefit of being objectively verifiable by individuals (correlation coefficient 0.990).
    • But also with HUD and CA AGI (0.967 and 0.880, respectively).
    • The fact that it ties out at both the making-ends-meet and the filing-a-joint-return ends of the spectrum is comforting (a sketch of the correlation check follows this list).
  • It is well documented online.   
  • It was created by a credible external organization.
  • It is influenced significantly, but not overwhelmingly, by housing costs.
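
As a sketch of how such a cross-index check can be run: the series below are invented placeholders (the real comparison paired MIT Living Wage values with the CBPC, HUD and CA AGI series by county), but the mechanics are the same:

```python
# Pearson correlation between two county-level index series.
# Values are hypothetical placeholders, in matching county order.
from statistics import correlation  # Python 3.10+

mit_living_wage = [30.1, 27.4, 24.8, 22.5]                # hypothetical $/hour
cbpc_making_ends_meet = [92_000, 81_000, 70_000, 63_000]  # hypothetical $/year

# A coefficient near 1.0 (the CBPC comparison above reports 0.990) means
# the two series rank counties in nearly the same order, with roughly
# proportional spacing between them.
print(f"{correlation(mit_living_wage, cbpc_making_ends_meet):.3f}")
```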

This is not to say that it cannot be criticized.  To embrace it, one must judge that addressing LCFF’s inequity to children in low-property-wealth districts embedded in high cost-of-living areas justifies two compromises:

  • The current (2017) release is based on CY 2014 data.
  • The actual measurement is a $/hour wage, which is low in relative terms.