SAT I: A Faulty Crystal Ball

Status: Archived
Subject: University Testing

Once again trying to defend its lucrative product, the College Board is aggressively pitching a new study on the SAT’s ability to predict college success. Though released by the University of Minnesota, the “unbiased” report was funded by the College Board and co-authored by a Board Vice President.

Despite some fanfare accompanying the release of the study, very little new information was presented. Results showed that SAT combined scores (Verbal plus Math) were moderately related to freshman GPA (FGPA), with a correlation of .53 (hence explaining a bit more than a quarter of the variance in grades). This level of predictive validity is actually less than that of high school GPA (.54), as indicated in several College Board publications, demonstrating once again that a student’s high school record is still the single best predictor of college performance. The study did not address the predictive validity of SAT scores combined with high school GPA.
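
As a rough check on the “quarter of the variance” figure: squaring a correlation coefficient gives the proportion of variance accounted for in a simple linear prediction, so the reported correlation works out to

r² = (0.53)² ≈ 0.28,

or roughly 28 percent of the variation in freshman grades.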

The research also found that test scores diminish in predictive power as students progress through college, with the SAT-V and SAT-M relating only slightly to degree attainment (correlations under .2). This is hardly surprising, since the SAT is validated only to predict first-year grades. James Crouse and Dale Trusheim explore this topic in The Case Against the SAT (see order form, p. 15), pointing to the SAT’s weak contribution to predicting college graduation. Data they analyzed demonstrated that using the high school record alone to predict who would complete a bachelor’s degree produced “correct” admissions decisions 73.4% of the time, while using the SAT together with high school GPA produced “correct” decisions in 72.2% of cases.

The University of Minnesota study was compromised by heavy reliance on research by the College Board and the Educational Testing Service (ETS), the SAT’s sponsor and manufacturer. Analyses conducted by institutions with no vested financial interest often provide a more accurate picture of the SAT’s true predictive power. For example, in a chapter in The Black-White Test Score Gap, Frederick Vars and William Bowen found that a 100-point increase in SAT total score was associated with a negligible gain of one-tenth of a grade point in college GPA, based on the experiences of 10,000 students at 11 selective public and private institutions of higher education. This offered about the same predictive value as knowing whether an applicant’s father had a graduate degree or her mother had completed college.

Bates College, a “test-score optional” liberal arts school in Maine, conducted its own validity studies over a five-year period to determine the most powerful variables for predicting undergraduate success. Bates found that students’ self-evaluations of their “energy and initiative” added more to the ability to predict college grades than did either the SAT-M or SAT-V. And while SAT “non-submitters” averaged 160 points lower on the exam, their FGPA at Bates was only five one-hundredths of a point lower than that of “submitters” (see Test Scores Do Not Equal Merit, order form, p. 15).

Many institutions conducting their own validity studies will reach the same conclusion as the nearly 400 colleges and universities that already de-emphasize or eliminate test scores: the SAT adds little information about students’ likely future performance beyond what high school grades and the rigor of classes taken already reveal.

To learn more about the shortcomings of the SAT's predictive ability, read FairTest’s newly released fact sheet “Demystifying the SAT’s Ability to Predict College Success,” available on-line at http://www.fairtest.org/facts/satvalidity.html, or by sending a self-addressed stamped envelope to SAT Validity c/o FairTest, 15 Court Square, Suite 820, Boston, MA 02108.