Computer to Grade GMAT Essays

Status: Archived
Subject: University Testing

Beginning this winter, a computer will score the essays written by the 400,000 exam-takers who sit for the Graduate Management Admission Test (GMAT) each year. "E-Rater" software developed by the Educational Testing Service (ETS) will replace one of the two humans who now read each essay.

ETS claims the new computer scoring formula compares each candidate's word choice and sentence structure with a database of writing samples of varying quality previously graded by experienced readers. Essay length may also be a factor in assigning grades.
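
In rough outline, that kind of formula amounts to extracting surface features from an essay (word choice, sentence structure, length) and comparing them with the features of essays humans have already graded. The sketch below illustrates the general idea with invented features, a toy nearest-neighbor comparison, and hypothetical names throughout; it is not ETS's actual E-Rater formula.

```python
# Illustrative sketch of scoring by comparison with a human-graded corpus.
# The feature choices and the nearest-neighbor rule are invented for
# demonstration; this is not ETS's E-Rater formula.

def features(essay):
    words = essay.split()
    sentences = [s for s in essay.replace("!", ".").replace("?", ".").split(".") if s.strip()]
    return (
        float(len(words)),                                        # essay length
        len(set(w.lower() for w in words)) / max(len(words), 1),  # word-choice variety
        len(words) / max(len(sentences), 1),                      # crude sentence-structure proxy
    )

def score(essay, graded_corpus, k=3):
    """Average the human grades of the k graded samples whose features
    are closest to the candidate essay's."""
    target = features(essay)

    def distance(sample_text):
        return sum((a - b) ** 2 for a, b in zip(target, features(sample_text))) ** 0.5

    nearest = sorted(graded_corpus, key=lambda pair: distance(pair[0]))[:k]
    return sum(grade for _, grade in nearest) / len(nearest)
```

Here graded_corpus stands in for the database of writing samples of varying quality previously graded by experienced readers, supplied as (essay text, human grade) pairs.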

Unique writing styles, atypical examples, and uncommon words may all confuse the mechanical grading process. If the computer's score differs significantly from the human reader's, another person is supposed to reread the submission and assign a final result.
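
Stated as a rule, that adjudication step might look like the following sketch. The disagreement threshold is a placeholder value (the article says only that the scores must differ "significantly"), and the function is illustrative, not ETS procedure.

```python
from typing import Optional

def final_score(human: float, computer: float,
                second_human: Optional[float] = None,
                threshold: float = 1.0) -> float:
    """Average the human and computer scores when they roughly agree;
    otherwise defer to a second human reading."""
    if abs(human - computer) <= threshold:
        return (human + computer) / 2
    if second_human is None:
        raise ValueError("scores disagree; a second human reading is needed")
    return second_human
```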

Some published research indicates a high correlation between scores assigned by human graders and a carefully programmed computer. But two ETS researchers, writing in the Winter 1998 issue of the official journal of the National Council on Measurement in Education, warn that relying only on technical characteristics can devalue writing. ETS Director of Research Randy Elliot Bennett and Principal Research Scientist Isaac I. Bejar note, "a report dominated by diagnostic information concerning easily recoverable mechanics would send a very different message than one emphasizing the more complex aspects of writing."

Despite the methodological cautions, test-makers are likely to press ahead with exam computerization due to economic pressures in their industry (see Examiner, Spring 1997). The makers of the GMAT claim that about one-third of the current $150 registration fee goes to essay scoring by humans.

GMAT administration was completely computerized in the fall of 1997. Last winter a system-wide "crash" left 1,300 test-takers staring at blank screens (see Examiner, Winter 1997-98). Since that time, many GMAT candidates have questioned the accuracy of their scores and complained about test administration problems.

- "Validity and Automated Scoring: It's Not Only the Scoring," Educational Measurement: Issues and Practice , V. 17, N. 4.