SAT, ACT Gender Gaps

Status: Archived
Subject: University Testing

Test scores for members of the high school class that graduated last spring rose slightly from the previous year's levels. At the same time, the gap between male and female test-takers grew on both the ACT and the SAT.


On the SAT's 400-1600 scale, average scores climbed by 3 points. All of the increase was on the Math portion of the exam.


The ACT, which uses a 1-36 scale, reported that scores rose by one-tenth of a point, a proportionately similar gain. For the past two decades, college-bound test-takers' performance has remained generally stable, with a slight but steady upward trend in math and small year-to-year variations in verbal scores.
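
One rough way to see why a 0.1-point ACT gain is "proportionately similar" to a 3-point SAT gain is to take each increase as a share of its test's full score range (an illustrative baseline chosen here, not the testing companies' own calculation):

\[
\frac{3}{1600 - 400} = 0.25\% \qquad \text{versus} \qquad \frac{0.1}{36 - 1} \approx 0.29\%
\]

Comparing the gains against typical score spreads rather than scale widths would give figures of the same order.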


Despite a growing body of evidence demonstrating that the test consistently underpredicts the academic ability of young women (see Examiner, Winter 1996-97), the SAT gender gap actually increased by one point, to 40 points. Females now score 36 points below males on the Math portion of the SAT and 4 points lower on Verbal. The ACT gender gap grew by one-tenth of a point, to 0.3, the equivalent of about 12 points on the SAT. Changes to the Preliminary SAT (PSAT) made to settle FairTest's gender bias civil rights complaint (see Examiner, Summer 1997) have not been applied to the SAT, the ACT, or the tests used for graduate school admissions.
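
Unpacking the equivalence stated above, the figures imply an informal conversion factor of roughly 40 SAT points per ACT composite point (a rule of thumb, not an official concordance):

\[
\frac{12\ \text{SAT points}}{0.3\ \text{ACT points}} = 40\ \text{SAT points per ACT composite point}
\]

That ratio is in the same ballpark as the ratio of the two scales' widths, 1200 points to 35.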


The sole scientific claim of the SAT and ACT remains their capacity to modestly predict first-year college performance, yet women earn higher undergraduate grades than men despite their lower scores. Either the nation's colleges and universities are wrong about what academic merit is, or the SAT and ACT are fundamentally flawed. Even the test makers admit that differences in demographics and preparation fail to account for the majority of the gender gap (see related article, this issue).
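
The underprediction pattern referenced here can be made concrete with a toy calculation. The sketch below uses entirely invented numbers (it is not FairTest, College Board, or ETS data) to show the mechanism: when women earn higher grades than men at the same score, a single pooled score-to-grade prediction line systematically forecasts lower grades for women than they actually earn.

```python
# Toy illustration of score-based "underprediction" -- all numbers invented.
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Hypothetical combined SAT scores: men average somewhat higher...
sat_m = rng.normal(1020, 200, n)
sat_f = rng.normal(980, 200, n)

# ...but at any given score, women earn slightly higher first-year grades.
gpa_m = 1.0 + 0.0015 * sat_m + rng.normal(0, 0.4, n)
gpa_f = 1.2 + 0.0015 * sat_f + rng.normal(0, 0.4, n)

# Fit one pooled prediction line, as a simple validity study might.
sat = np.concatenate([sat_m, sat_f])
gpa = np.concatenate([gpa_m, gpa_f])
slope, intercept = np.polyfit(sat, gpa, 1)

predicted_f = intercept + slope * sat_f
print("women's actual mean GPA:    %.2f" % gpa_f.mean())
print("women's predicted mean GPA: %.2f" % predicted_f.mean())
# The pooled line forecasts a lower GPA for women than they actually earn;
# that shortfall is what "underprediction" refers to.
```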


To divert attention from its test's continuing gender bias and other problems, the College Board tried to focus media attention on "grade inflation," apparently hoping to justify continued use of the SAT as a more stable measure. But that argument is explicitly contradicted by ETS's new book, Gender and Fair Assessment, which concludes, "If college grade inflation or deflation affects all students equally, it should not change the correlation between college grades and predictors."
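
The statistical point behind that quotation is that correlation is unaffected when every student's grades shift by the same amount. A minimal sketch with invented scores and grades (not ETS data) makes this concrete:

```python
# Uniform grade inflation leaves the grade-predictor correlation unchanged.
import numpy as np

rng = np.random.default_rng(1)
scores = rng.normal(1000, 200, 500)                       # hypothetical SAT scores
grades = 2.0 + 0.001 * scores + rng.normal(0, 0.3, 500)   # hypothetical first-year GPAs

inflated = grades + 0.4   # every student gains the same 0.4 GPA points

r_before = np.corrcoef(scores, grades)[0, 1]
r_after = np.corrcoef(scores, inflated)[0, 1]
print(round(r_before, 10), round(r_after, 10))   # identical to numerical precision
```

The same invariance holds if all grades are multiplied by a common positive factor, which is why uniform inflation or deflation cannot, by itself, change how well scores predict grades.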


In reality, score inflation for individual test-takers whose families can afford $600 to $700 for commercial SAT preparation courses (let alone $1,500 for in-home, private test tutoring) is far more of a problem for admissions offices. How can a college know how much a particular applicant's score was boosted by coaching? Whatever the SAT measures, is a given student's score of, say, 1200 accurate or artificially inflated? These unanswerable questions are a major reason why an increasing number of institutions are adopting test-score optional admissions (see Examiner, Summer 1997).