Is NAEP a Fair and Valid Benchmark?

Status: Archived
Subject: K-12 Testing

Many more students reach the "proficient" level on states' own tests than score "proficient" on the National Assessment of Educational Progress (NAEP). As a result, a push has begun to make the NAEP levels not only independent benchmarks but also enforceable national standards (see national test article, this issue).

There are certainly many reasons to oppose a national test, but the claim that the NAEP levels represent some sort of "gold standard" is simply not true.

The National Assessment Governing Board (NAGB) oversaw the setting of the NAEP achievement levels. A team of experts hired to evaluate the results was fired after sharply criticizing the levels-setting process. Subsequently, the National Academy of Sciences, the National Academy of Education, the General Accounting Office, and many independent researchers all reached the same conclusion: the levels-setting process was flawed.

In fact, according to all of these reports, the levels were set unreasonably high. That profound flaw has been generally ignored by proponents of using NAEP as an enforceable standard. The National Center for Education Statistics is now engaged in a detailed study comparing state and NAEP results. It could lead to another round of breast-beating about low state standards, despite the unjustifiably high levels set on the NAEP tests.

Another way to approach this issue is to ask whether the levels on state tests are in fact reasonable. The Pennsylvania State Board of Education commissioned a study of its state tests, the PSSA. Well over half the students who scored "basic" or "below basic" on the PSSA went on to enroll in non-remedial math courses at three large universities. Yet these same students would have "failed" the PSSA if "proficient" were the cut-off. And far more students in Pennsylvania score "proficient" on the PSSA than on NAEP. It is NAEP's levels that are at fault, not student performance.