Measurement-Driven Instruction

Status: Archived
Subject: K-12 Testing

Despite substantial criticism, the practice of high-stakes testing once again appears to be growing. For example, four states have decided to introduce mandatory high school exit exams over the next four to five years, and many states are placing high stakes on schools themselves. (See the articles on Chicago, North Carolina and Texas in this issue.)

This approach is fueled by the idea that since teachers will usually teach to tests they perceive as important, school systems and states should make teaching to the test an explicit policy -- provided that such instruction does not use the actual test. High stakes for students and schools underscore the importance of the tests.

This idea gained prominence in the early 1980s with the rise of minimum competency tests, under the name of measurement-driven instruction (MDI). By the late 1980s, teaching to the test was drawing sharp criticism. Then, in the early 1990s, particularly through the work of New Standards, a national organization developing standards and performance assessments, the approach shifted toward teaching to performance assessments. Supporters claimed this approach would improve curriculum, instruction and student learning, but critics said it would simply repeat the problems of MDI. While the jury is still out on MDI with performance assessments, many states and districts are once again using very limited norm-referenced, standardized tests to define the curriculum and shape instruction.

The claim that teaching to the tests will lead to real learning gains is highly questionable. Numerous studies have found that teaching to the test tends simply to produce inflated scores, as first made infamous by the "Lake Wobegon" report, which found that all states claimed to be above average. Despite widespread use of MDI based on traditional standardized tests, scores on the National Assessment of Educational Progress exams (which are not taught to) have changed only slightly since the late 1960s and early 1970s. A recent research summary also found a lack of evidence to support the claim that high-stakes testing enhances student motivation, and some evidence that motivation may actually decrease (see Examiner, Winter 1996-97).

Meanwhile, it is quite clear that school programs which focus heavily on the tests fail to engage students in the kind of work that helps them learn to really think, solve problems and apply knowledge. In Massachusetts, some teachers are already expressing alarm that the state's new high-stakes exams, which begin next year, will force them to abandon extended, in-depth projects in order to find time for the mile-wide but inch-deep coverage of content they believe will be on the state tests. That is, memorization and routine procedures, separated from thought and application and therefore soon forgotten, will become official state practice, enforced through testing.

Despite the lack of evidence to support measurement-driven reform, many politicians continue to promote this approach. As has become clear over the past year or two, assessment reform is being pushed back by a reversion to outmoded practices reminiscent of the factory model of efficiency that swept schools in the 1920s. The effect of these practices is to de-skill teachers and narrow the curriculum, thereby limiting students' learning, driving many students away from learning and often pushing them out of school entirely.

Detailed critiques of measurement-driven instruction and reform, based on research in Arizona, have been written by Mary Lee Smith and her colleagues. See Smith, The Politics of Assessment: A View from the Political Culture of Arizona (CRESST CSE Report 420), UCLA, Dept. CSE, 301 GSEIS, Box 951522, Los Angeles, CA 90095; (310) 206-1532; $3.00 + $1.50 postage.