Accountability in Higher Ed

Status: Archived
Subject: K-16 Testing

Over the past few years, accountability issues have surfaced in higher education, much as they did in the 1980s in K-12 public schooling. Chris Gallagher argues that many of the people who brought high-stakes standardized testing to prominence as the key accountability 'reform' in public schools are behind the current push in higher ed. If we are not vigilant, he says, we could see standardized testing emerge as a powerfully controlling force in post-secondary institutions, causing damage similar to the harm done to public schools by state and federal test-based accountability policies.

 

Believe It: NCLB-Style Accountability Extends to Higher Education
Chris W. Gallagher

 

Secretary of Education Margaret Spellings has been busy of late denying that her Action Plan for Higher Education, based on the report of the Commission on the Future of Higher Education, amounts to expanding the No Child Left Behind Act into higher education. But while her plan might not call for the direct expansion of NCLB policies, it is clear that various forces-what amount to a lineup of usual suspects-are aligning to extend the reach of NCLB-style test-based accountability into the halls of academe:

 

The federal government. Reminiscent of the 1983 National Commission on Excellence in Education, whose A Nation at Risk ignited test-based accountability with its Cold War call to stem the "rising tide of mediocrity," the Commission on the Future of Higher Education has challenged the academy to shake off its purported complacency as the rest of the world "catches up." Charged by Education Secretary Margaret Spellings to investigate what Americans get for their investment in higher education, the Commission held several forums and issued five reports, the first four versions "for discussion purposes only." Although the language of the drafts softened over time, the same basic message carried through: the system is too complacent and inefficient to respond to the newly competitive global marketplace, and urgent reform is needed. Change may be leveraged by accreditation requirements and federally funded state accountability systems that report student outcomes via a "consumer-friendly" database that will allow institutional and interstate comparisons.

 

Secretary Spellings lost no time endorsing the Commission's accountability recommendations. She proposed an Action Plan that includes creating a federal student database, providing funds to institutions that use and report the results of standardized tests, and challenging accrediting groups to require reporting of student learning outcomes (as many, it should be noted, already do). Indeed, a U.S. Department of Education oversight committee has begun cracking down on accrediting groups-such as the American Academy for Liberal Education-that it deems too lax on student learning standards. Meanwhile, Senate leaders inserted into a draft bill on improving educational competitiveness a proposal to set aside federal funds to help states design databases to track students from kindergarten through college. Though the proposal is not likely to pass, it indicates Congress's interest in the Commission's recommendations.

 

The Doomsayers. Just as A Nation at Risk and the test-based accountability movement were given wings by the "literacy crisis" of the 1970s, now comes feverish national media attention on the National Assessment of Adult Literacy's "A First Look at the Literacy of America's Adults in the Twenty-first Century" and the American Institutes for Research (AIR)'s "The Literacy of America's College Students." Once again, the focus is on "functional literacy," but this time the literacy in question is that of college students and graduates.

 

Despite a spate of serious questions about the methodology and technical quality of the NAAL test-first conducted in 1992 and then again in 2003-the national media have trumpeted its findings regarding declines in college literacy. They have not highlighted the increases in literacy performance revealed by the 2003 study, including jumps in quantitative literacy overall and in prose literacy for African-Americans and Asian-Americans. Nor have they spread the news that the more education a person has, the higher his or her literacy level. Unlike the federal government's December 15, 2005 news release-which did not mention decreases in college literacy rates-the national media have been quick to stoke crisis discourse. This has been especially the case with the second study: AIR's administration of the NAAL to 1,827 graduating students at 80 randomly selected two- and four-year institutions put the spotlight on college literacy. Once again, the news was not all bad: for instance, the assessment revealed that the literacy levels of college students far outstrip those of the general population and that only minuscule percentages of college students score "Below Basic." But the Associated Press (AP) story that circulated in newspapers and online news sources announced college students' "dismaying" lack of literacy skills: "Nearing a diploma, most students cannot handle many complex but common tasks, from understanding credit card offers to comparing the cost per ounce of food…More than 50% of students at four-year schools and more than 75% at two-year colleges lacked the skills to perform complex literacy tasks" (Feller).

 

These numbers involve some sleight of hand. The NAAL defines the performance category "intermediate"-the second highest-as involving skills required to perform "moderately challenging literacy activities." The definition of "proficient," meanwhile, indicates skills required to perform "more complex and challenging literacy tasks." Clearly, the intention is not to reserve "proficient" for "complex" tasks. In fact, the qualifier "more" in that definition indicates a degree of complexity (albeit lower) in the intermediate tasks as well. If we are looking at "complex" tasks-it would be more accurate to use the NAAL's term "challenging," which again spans the two categories-here are the percentages of students performing in the top two categories:

 

               Four-Year   Two-Year   All Adults
Prose             94%        88%         57%
Document          92%        95%         66%
Quantitative      80%        67%         46%

 

Though these data may raise concerns, they hardly substantiate doomsday scenarios.

 

The testocrats. Testmakers, test-prep companies, textbook companies, and private Educational Management Organizations and supplemental service providers have reaped enormous profits from test-based accountability. And right on cue, today's purveyors of standardized tests are hardly shy about stoking calls for postsecondary accountability. For example, Richard H. Hersh, co-director of the Collegiate Learning Assessment (CLA), writes in The Atlantic Monthly about the need to end "faith-based acceptance" of the quality of U.S. higher education. Grades, surely inflated, are not to be trusted; only the "hard evidence" of standardized testing will do. Hersh helpfully reminds readers that several tests of this sort are already available-including his CLA, which measures "value added" to students by an institution, allowing us to compare institutions to determine which provides the most bang for the buck.

 

The Educational Testing Service also has stepped forward to offer its services. In June, ETS released a well-timed paper, A Culture of Evidence: Postsecondary Assessment and Learning Outcomes. The paper argues that we need "hard evidence" of the effectiveness of U.S. higher education in the context of an increasingly competitive global marketplace, and that only a national, data-driven approach-an "econometric model"-will do (27). ETS identifies four "dimensions" of student learning-workplace readiness and general skills, domain-specific knowledge and skills, soft skills (such as teamwork and creativity), and student engagement-but its recommendation is to focus, for now, on the first (9). The other major recommendation to policymakers: convene an expert panel to review the Assessment Framework Template included in the paper. The Template is a checklist matching existing assessment instruments with the four dimensions of student learning. Several of those instruments are ETS-designed tests, and others-like the CLA-are supported by ETS products.

 

Some believe that, despite the machinations of these usual suspects, the famously autonomous culture of higher education and its diversity make it inhospitable to the kind of test-based accountability that NCLB has imposed on schools. But Secretary Spellings' avowed interest in using her "bully pulpit" to reform the accreditation system and her commitment to tying funding to the use of standardized tests should give proponents of this view pause. As for the claim that U.S. higher education is too diverse for standardized measures, consider that even as the Spellings Commission backed away from its early noises about standardized testing-despite the strong support for such testing by its Chair, Charles Miller-some powerful voices within higher education are scrambling to prove themselves test-friendly.

 

For instance, the National Association of State Universities and Land-Grant Colleges (NASULGC) and the American Association of State Colleges and Universities (AASCU) jointly released a report in August called Toward a Public Universities and Colleges Voluntary System of Accountability for Undergraduate Education (VSA). Here, the organizations affirm the need for "value added" accountability measures that allow comparisons of like institutions. Almost apologetically, the report writers announce that they cannot recommend a single such measure (extant tests are as yet unproven). Instead, they suggest that participants in the system pilot a few outcomes tests, leaving open the possibility that one "best" test will emerge. Because this document proposes a voluntary system, and perhaps because it is long and verbose, it has been criticized as a "weak" response to the Spellings Commission. But this reading ignores the fact that NASULGC/AASCU announce their willingness to go much farther down the road of standardized testing than even the Commission was finally willing to go.

 

The report writers are careful to note problems with the outcomes tests now in use, including the currently fashionable CLA. Piloting of the CLA has raised a host of concerns, including potential "motivation bias" (and the specter of students sabotaging the results of what is for them a meaningless exercise); the small samples (typically 100 freshmen and 100 seniors); the quality of the prompts (which for example ask students to pretend to write a memo to a fictional boss about buying a private plane); the giving over of instructional time to administer the exam; and the unfairness inherent in counting the results of the exam in courses that don't teach the content. (This last is one proposed solution to the motivation bias problem; another is to award an iPod to the highest scorer.) And yet, the CLA and other outcomes tests are gaining a foothold in institutions across the country and solidifying the testing culture in higher education.

 

Higher education never was immune from the test-based accountability agenda of the 1970s and 1980s. It was during that period, for example, that City University of New York (CUNY) collaborated with the testing establishment to create the CUNY Writing Assessment Test and set up elaborate structures of remedial courses to regulate access to the credit-bearing curriculum. In recent years, the New York State Regents Exam has been made a graduation requirement for all city high school students. At CUNY, remedial courses have been eliminated from the senior colleges, a new entrance exam has been instituted, and a controversial, three-strikes-you're-out rising junior exam has been implemented. ("Rising junior" tests have been used for some time in states such as Texas and Florida to limit access to the final two years of college.) These decisions have created divisions among the once markedly collaborative CUNY faculty: instructors at the two-year schools teach in anticipation of the entrance exam, and instructors at the senior colleges teach in anticipation of the rising junior exam. Although the latter exam exerts less direct pressure on faculty than the former, the tests-one regulating access to the "regular" curriculum and the other regulating access to advancement-shape curriculum and instruction up and down the system.

 

As we have seen, however, the Spellings Commission and NASULGC/AASCU have in mind a particular brand of institutional accountability: the ranking of institutions based on the results of outcomes-based "value added" tests. An important front in this effort has been opened up by the surging P-16 movement. Approximately 28 states have some kind of P-16 initiative underway. Typically, these initiatives are spearheaded by governors, university regents and presidents, chief state school officers, state school boards, legislative education committees, and business leaders. At the national level, P-16 is sponsored by the Education Trust, a purportedly non-partisan research organization that touts high-stakes standardized testing and NCLB (and which has a representative, Kati Haycock, on the Spellings Commission); the Education Commission of the States, an education policy research clearinghouse with a similar affinity for test-based accountability; and the National Association of System Heads.

 

As a primer in the March 10, 2006 Chronicle of Higher Education makes clear, P-16 is touted by a veritable who's who of right-wing reformers, including Haycock, Chester Finn, Diane Ravitch, and Eugene Hickok. (Hickok, in a scathing editorial in the New York Times, has called for the application of the "underlying principles" of NCLB to higher education, which he insists is "seriously out of touch with much of America.") Despite the attractiveness of its Big Idea-that students benefit from a "seamless educational pathway"-P-16 functions largely as a Trojan Horse, delivering the test-based accountability agenda to higher education in the form of value-added tests. The language that Indiana's Educational Roundtable uses to describe the purpose of its P-16 initiative demonstrates the values informing this test-based push: "to evaluate current expenditures, realize efficiencies, leverage resources, prioritize strategies, and make critical investments to bring about the student achievement outcomes the state desires" ("Indiana's" 2).

 

This passage almost sounds like a parody, but it is of a piece with the accountability-driven language increasingly used in administrative offices of institutions of higher education, where MBAs busily design templates for "strategic prioritizing," funding streams are diverted from academic departments to "programs of excellence," and test-based accountability is becoming the coin of the realm. Make no mistake: higher education "accountability" is of an ideological piece with the test-based, competition-driven, consumerist model that brought us No Child Left Behind in the first place.
The irony here is that this is a "faith-based" move; there is no evidence that this approach to educational reform actually works. On the contrary, it has led to a raft of negative consequences including narrowing of curriculum, focus on lower-order skills, increasing dropout-or pushout-rates, decreasing graduation rates, growing student attrition or retention in grades preceding testing grades, and cheating scandals. A further irony is that despite sneering representations of higher education as by turns lazy and recalcitrant, postsecondary faculty and organizations have done a great deal of work in assessing and documenting student learning. A small, and by no means exhaustive, list would include the following:

  • The Association of American Colleges and Universities, a leader in liberal education, provides resources for assessing and documenting student learning for General Education (see www.aacu.org/issues/assessment).
  • The national Peer Review of Teaching Project is dedicated to making visible the intellectual work of teaching by systematically investigating, analyzing, and documenting undergraduate students' learning (see www.courseportfolio.org).
  • The Carnegie Foundation for the Advancement of Teaching sponsors the Carnegie Academy for the Scholarship of Teaching and Learning (CASTL), which is also dedicated to making teaching and learning public and "shareable" (see www.carnegiefoundation.org).
  • The American Association for Higher Education houses an online e-portfolio clearinghouse for "alternative means" of assessing student learning (see http://ctl.du.edu/portfolioclearinghouse).
  • Alverno College has designed a web-based "Diagnostic Digital Portfolio," which collects key performances across students' undergraduate careers, allowing them to document, reflect on, and analyze their learning across time and across courses (see http://ddp.alverno.edu/).
  • The Computer Writing and Research Lab at the University of Texas-Austin has designed the Learning Record, which allows students and instructors to reflect and comment on a collection of work samples in and across courses. Created as an alternative to standardized tests, the Learning Record allows students to document their work in five dimensions: confidence and independence, knowledge and understanding, skills and strategies, use of prior knowledge and experience, and critical reflection (see http://www.cwrl.utexas.edu/%7Esyverson/olr/contents.html; for more information on the K-12 Learning Record assessment system, see http://www.fairtest.org/Learning_Record_Home.html).
These projects and programs belie the perception that U.S. higher education does not care about student learning. They are models of effective assessment-as-learning and they provide rich information about that learning. But they are a far cry from the kind of standardized assessments that would allow the creation of the institutional ranking system that Secretary Spellings wants.
Will Secretary Spellings ultimately get what she wants? Perhaps the answer to this question lies in whether postsecondary educators will follow the lead of their colleagues above and claim assessment as an instructional tool before it is wielded against them-as it has been against their K-12 colleagues-as a policy weapon.

 

Chris W. Gallagher is Associate Professor of English at the University of Nebraska-Lincoln, where he coordinates the writing program and teaches courses in writing, rhetoric, literacy, and teaching. He is the author, most recently, of Reclaiming Assessment: A Better Alternative to the Accountability Agenda (Heinemann, 2007). His earlier work includes Radical Departures: Composition and Progressive Pedagogy (NCTE, 2002) and articles in several journals and magazines, including Phi Delta Kappan. Since 2001, Gallagher has served as Coordinator of the Comprehensive Evaluation of Nebraska's School-based, Teacher-led Assessment and Reporting System (STARS).

 

REFERENCES

Baer, Justin D., Andrea L. Cook, and Stephane Baldi. 2006. The Literacy of America's College Students. The American Institutes for Research. January. Available at http://www.air.org/news/documents

 

Dwyer, Carol S., Catherine M. Millett, and David G. Payne. 2006. A Culture of Evidence: Postsecondary Assessment and Learning Outcomes. Educational Testing Service. June. Available at http://www.ets.org/Media/Resources_For/Policy_Makers/pdf/cultureofevidence.pdf.

 

Feller, Ben. 2006. "Study: College Students Not Literate Enough for Complex Tasks." The AP State and Local Wire. January 19. LexisNexis. U. of Nebraska Lib., Lincoln, 2 Oct. 2006. http://lexisnexis.com/.

 

Hersh, Richard H. 2005. "What Does College Teach?" The Atlantic Monthly. November. Available at http://www.theatlantic.com/doc/200511/measuring-college-quality.

 

Hickok, Eugene. 2006. "No Undergraduate Left Behind." New York Times. October 11. A27.

Indiana's Education Roundtable. 2003. "Indiana's P-16 Plan for Improving Student Achievement." October 28. Available at http://www.edroundtable.state.in.us/pdf/P16/P-16plan.pdf.

 

Kutner, Mark, Elizabeth Greenberg, and Justin Baer. 2005. A First Look at the Literacy of America's Adults in the Twenty-First Century. National Assessment of Adult Literacy. December. Available at http://nces.ed.gov/NAAL/PDF/2006470.PDF.

 

McPherson, Peter, and David Shulenburger. 2006. Toward a Public Universities and Colleges Voluntary System of Accountability for Undergraduate Education (VSA). NASULGC/AASCU Report. August. Available at http://www.nasulgc.org/vsa-8-31-06%20_7_%20_2_.pdf.