What is Data-Driven Reform?

Status: Archived
Subject: K-12 Testing

It used to be that schools and even districts talked first about children. Now they talk about test scores and data. “Data-driven” is the latest buzzword sweeping education accountability circles.

Ostensibly, it means using information to spur improvements in teaching and learning. In practice, data-driven often appears to mean no more than narrowing curriculum and instruction to fit the standardized tests used to hold schools and students accountable. “Data-driven” usually becomes “teaching to the test.”

Most of the data used is simply test scores, broken into sub-scores and perhaps disaggregated by population group. While the state tests used to rank schools under NCLB are the primary source of data, districts and schools increasingly administer local tests designed to mimic the state exam (or at least its multiple-choice items) and thus provide more “data” with which to prep students for the big test.

Peter Farrugio, a former elementary school teacher in California and a test-reform activist, describes what “data-driven” means in his state:

“There is uniformly no consideration given to alternative or multiple outcome measures, because the state doesn’t allow them into the high-stakes [Academic Performance Index].

"Likewise, there is no availability of schools’ background data, like the percentage of special ed students or the number of monthly [individualized education plan] meetings a principal must attend, or a school’s improvement in attendance rates and student suspensions (which would be indicators of improved school climate, which would facilitate improved learning opportunities). Therefore, the typical annual discussions of ‘data’ and how to become ‘data driven’ include the self-evident observations that, for example, the English Learners continue to score low, and seem to be immune to various didactic treatments to raise their scores on a test in a language that they still don’t understand.”

In contrast to this sort of “data-driven” reform, researchers have extensively documented the educational power of “formative assessments,” which are specifically designed to give students feedback that helps them learn, as distinct from “summative” assessments such as annual state tests (see Examiner, Spring-Summer 2004 and Summer 2002). But in a misuse of the terminology, mini-tests designed to prepare students for high-stakes state exams are increasingly referred to as “formative” assessments or “assessments for learning.”

As with that other hot buzzword, “value-added” (see Examiner, Summer 2000), the underlying problem remains “Garbage In, Garbage Out.” Some of this information is useful. The danger, as with teaching to the test, is that other information is excluded, that the curriculum is narrowed to the limited set of skills and knowledge that appears on the test, and that teachers’ skills are reduced rather than enhanced.

Once again, education reform is being conducted on the cheap, avoiding the costs of the professional development and smaller class sizes that are really needed to improve schooling, within a model that funnels most of what is spent into corporate testing coffers. Instead of young human beings learning to become adults, we have objects who are processed as data and deemed to have “value” only when their test scores rise.