District-Level NAEP?

K-12 Testing

Last year's reauthorization of the National Assessment of Educational Progress allows NAEP to report test results at the district level for the first time. Now, the U.S. Department of Education has solicited applicants to participate in a trial program of the new reporting breakdown. NAEP tests samples of students in a variety of subject areas in grades 4, 8 and 12.


Critics, including the National Education Association, the American Federation of Teachers, the American Association of School Administrators, and FairTest, warn that using NAEP at the district level will ultimately corrupt the assessment because schools will end up teaching to it, producing inflated scores. As Congress revisits previously authorized education programs, district NAEP may come under renewed scrutiny.


At its inception, NAEP reported data nationally, by four large geographic regions, and by demographic groups. In 1988, NAEP was authorized to conduct trial state-level assessments (see Examiner, Spring 1988, Winter 1993-94).


Critics have charged that the National Assessment Governing Board, established in 1988 to share oversight of NAEP with the U.S. Department of Education, has been attempting to establish NAEP as a national exam. Opponents of state and district NAEP reporting view it as a "slippery slope" toward a national test with individual scores.


Proponents argued that NAEP is superior to traditional achievement tests and could provide better state-level data. Once state-level reporting was approved, they argued that since some districts are larger than some states, districts too should have access to this superior data.


Unlike traditional standardized achievement tests, NAEP exams include performance tasks in addition to multiple-choice items. Results are not norm-referenced but are reported in terms of student attainment of achievement levels. However, the levels and the level-setting process have drawn strong criticism from prominent researchers, and their validity has been challenged (see Examiner, Fall 1991, Spring 1992).


The value of the data at the state level is itself questionable, a problem that will carry over to district use. Proponents have argued that NAEP will help states improve areas in which they are weak. However, one study showed that 89 percent of the score variance among states could be explained by four economic and social factors, none of them controllable by schools (see Examiner, Fall 1994). Moreover, if a state appears weak in a particular subject, it may simply be because the state or its districts chose not to emphasize that material at the time NAEP tests it.