Can you test what colleges teach? Academics are appalled that the government wants to try
U.S. News & World Report, March 12, 2007
By Alex Kingsbury
In his autobiography, The Education of Henry Adams, the grandson of the sixth president delivered the American school system one of its most memorable intellectual smackdowns. His treatise on the value of experiential learning concluded that his alma mater, Harvard University, “as far as it educated at all … sent young men into the world with all they needed to make respectable citizens. Leaders of men it never tried to make.” His schooling, replete with drunken revelry and privileged classmates, didn’t prepare him for a world of radical change: the birth of radio, X-rays, automobiles. “[Harvard] taught little,” he said, “and that little, ill.”
Today’s undergraduate education, of course, is far more than just the canon of classics that Adams studied. And with heavy investments in technology, it’s hard to argue that colleges don’t prepare students for the job market or the emerging digital world. But the question remains: What should a student learn in college? And whatever that is, which colleges teach it most effectively? With the average cost of private college soaring—and with studies consistently showing American students falling behind their peers internationally—it’s a question being asked more and more. And it’s one that colleges are at a loss to fully answer. “Every college tries to do what it says in the brochures: ‘to help students reach their full potential,’” says Derek Bok, former Harvard president and the author of Our Underachieving Colleges. But, he says, “most schools don’t know what that means. Nor do they know who is failing to achieve that full potential.”
It’s called “value added,” an elusive measurement of the thinking skills and the body of knowledge that students acquire between their freshman and senior years. In other words, how much smarter are students when they leave college than when they arrived? Trying to quantify that value—and assessing how effective each of the nation’s 4,200 colleges is at delivering it—is at the heart of one of the most ambitious and controversial higher-education reforms in recent history.
Later this month, U.S. Secretary of Education Margaret Spellings will meet with college leaders to discuss the findings of her Commission on the Future of Higher Education and its plan to assess college learning through one or more standardized tests. “For years the colleges in this country have said, ‘We’re the best in the world; give us money and leave us alone,’” says Charles Miller, the chairman of the commission. “The higher-ed community needs to fess up to the public’s concerns.”
Along with the parents footing the bills, the federal government has a vested interest in knowing how the nation’s colleges are doing their jobs. Although the government provides only 10 percent of the funding for all K-12 schools, it is responsible for 24 percent of all money spent on higher education. Despite this inflow of public money, colleges have largely escaped the accountability movement that has been shaping policy and curricula in the early grades.
One size. Not surprisingly, colleges abhor the idea of government-imposed testing, insisting that they are reforming themselves and that government oversight is not the answer in any case. A one-size-fits-all solution is grossly impractical, they argue, given the variety of American colleges, and it undermines the prized independence of the institutions, widely regarded as among the finest in the world. “No one wants standardized No Child Left Behind-style testing in colleges—not parents, not students, not colleges,” says David Ward, president of the American Council on Education. Adds Lloyd Thacker, author of College Unranked: Ending the College Admissions Frenzy, “The danger is that the soul of education will be crushed in the rush to quantify the unquantifiable.”
A combination of factors has prompted the government to rethink its historically hands-off policy toward higher education. They include a staggeringly high dropout rate, a perceived decline in international competitiveness, and sky-high tuitions. Nationwide, only 63 percent of entering freshmen will graduate from college within six years—and fewer than 50 percent of black and Hispanic freshmen will. And while degree holders have far greater earning power than nondegree holders, the students who incur debt only to drop out are often worse off than if they had never attended college in the first place.
And debts they have. A year of tuition at Harvard cost Henry Adams $75, or nearly $1,750 in today’s dollars. Now, four years at an in-state public college costs $65,400, up more than 27 percent in the past five years. Four years at a private school costs more than $133,000. In the past 30 years, the average constant-dollar cost of a degree from a private school has more than doubled. So it’s hardly surprising that college students with loans graduate with an average of $19,000 in debt.
Yet an expensive degree does not necessarily a literate citizen make. In 2003, the government surveyed college graduates to test how well they could read texts and draw inferences. Only 31 percent were able to complete these basic tasks at a proficient level, down from 40 percent a decade earlier. Fewer than half of all college students, other studies show, graduate with broad proficiency in math and reading. And, according to Bok, evidence suggests that several groups of college students, particularly blacks and Hispanics, consistently perform below the levels expected of them given their SAT scores and high school grades.
It is just these sorts of reports that have triggered the government’s demands for greater accountability. “It was always assumed that higher education knew what it was doing,” says John Simpson, president of the University at Buffalo-SUNY. “Now, the government wants provable results.”
There are currently two major tools used to measure student learning in college. The Collegiate Learning Assessment, administered to freshmen and seniors, measures critical thinking and analytical reasoning. About 120 schools use it—though nearly all keep the results confidential. Hundreds of schools also administer the National Survey of Student Engagement, which tracks how much time students spend on educational and other activities—a proxy for value added. Colleges have also made efforts to monitor student satisfaction, faculty effectiveness, and best classroom practices. The problem is, schools largely keep these results from the public.
Many graduate programs require standardized tests for admission, from the Graduate Record Exam to the more specialized tests for law, medicine, and business. So demonstrating a college’s effectiveness could be as simple a matter as tabulating its graduates’ pass rates on those exams. But many colleges have no way to determine if their graduates take these exams or how well they score. Nor, colleges argue, can they easily and comprehensively monitor starting salary, graduate school acceptance, or years spent in debt. This is despite the prodigious data-gathering capabilities of the fundraisers in the alumni office.
Common knowledge. One of the major hurdles to measuring value added is agreeing on what students should learn.
Should a philosophy major be proficient in calculus? Should a physics major be able to conjugate French verbs? A study of hundreds of students at the University of Washington suggests that measuring success within disciplines might be the way forward instead. “We found that learning outcomes were highly dependent on a student’s major,” says Catharine Beyer, who has compiled the results of that research into a book to be published this spring. “A chemistry student will learn something very different about writing than a philosophy major. That’s why standardized tests across institutions are too simplistic to determine what learning takes place.”
Others contend that a myopic focus on testing is simply the wrong way to think about learning. Peter Ewell, vice president at the National Center for Higher Education Management Systems, says that alternative assessments, like portfolios of student work or senior-year capstone projects, can be effective yardsticks for gauging progress. Ball State University in Muncie, Ind., for instance, requires all students to pass a writing test in order to graduate: in two hours, students must produce a three-page expository essay. In several majors, including architecture and education, students must maintain an electronic portfolio of their work.
In the next five years, Ball State will also give all students the opportunity to participate in an “immersive learning project,” in which they solve a real-world problem. One recent class, for example, produced a DVD about the American legal system for the local Hispanic communities. “The limitation of the Spellings commission is that they only think about universities in terms of the classroom,” says Jo Ann Gora, Ball State’s president. “We see our educational mission in much broader terms, including community involvement that is not easy to quantify with a test.”
To a large degree, schools already are held accountable for their performance. It happens through the accreditation process, in which an independent panel reviews the operation of an institution and gives its official blessing. When the process started, there were fewer colleges and far fewer federal dollars at stake. But now, with federal student loans contingent on a school’s credentials, a loss of accreditation could put a college out of business. Thus, accreditors are reluctant to fail schools, preferring instead to issue warnings and encourage improvement. Accreditors meeting in Washington recently also confessed that some were reluctant to shutter schools that are “failing in the numerical sense” because those institutions were serving students who otherwise might not have options.
Freeze. But if the feds have their way, that sort of attitude may change. The Department of Education recently made an example out of the American Academy for Liberal Education, a minor accrediting agency, by freezing its authority for six months for—among other things—failing to clearly measure student achievement. It was an indication of how quickly the government is moving to implement the recommendations of the commission. “We’re not just going to sit around and study this,” says Cheryl Oldham, the commission’s executive director. “We’re going to begin to correct the problems.”
Another key resource for evaluating schools is, of course, college rankings—the Best Colleges list by U.S. News in particular. College rankings have been blamed for all manner of ills, from runaway tuition costs to unhealthy adolescent stress. But chief among critics’ complaints is that U.S. News relies more on “inputs” such as SAT scores and the high school class ranks of admittees than on “outputs” of the sort that Spellings wants to measure.
“U.S. News rankings heavily weight the wealth of a school, through things like spending per student, rather than how much a student learns,” says Kevin Carey, a researcher at the nonpartisan think tank Education Sector.
U.S. News does not have access to such data unless colleges release it. But if such measures were incorporated, the rankings could change. Florida, for example, makes data about student learning public, often with surprising results. The average student at the University of Florida has SAT scores a full 100 points higher than those at Florida International University. There are fewer full-time faculty members at FIU, and only 4 percent of alumni donate money back to the school, compared with 18 percent of University of Florida grads. Those are just two reasons that the University of Florida ranks higher than FIU in the U.S. News list. Yet the average earnings of FIU grads—only one measure, to be sure—are significantly higher than those of their University of Florida counterparts.
The state of Texas also requires its public colleges to release more data. In a recent report, the state announced that the tiny University of Texas of the Permian Basin in Odessa far outperformed the larger UT campuses in El Paso and Dallas on the Collegiate Learning Assessment. What’s more, in every year from 2001 through 2004, Permian Basin also had a greater percentage of students either employed or enrolled in a graduate program within a year of graduation than its counterparts in El Paso and Dallas.
These are the sorts of statistics students should consider when looking at colleges, guidance counselors say. In their absence, students look elsewhere for comparisons—to campus luxuries like room service or Jacuzzis, for instance, or to the success of a school’s sports teams. “Students will choose a college because of its party reputation or its campus facilities or how many times it’s been on ESPN, because they don’t have a lot of other meaningful information to base their choice on,” says Steve Goodman, an educational consultant and college counselor. The irony is that it’s often easier to find statistics about a college football running back than it is to find, say, the college’s expected graduation rate for black males from middle-class households.
Spellings, for her part, sees outcomes as inseparable from the college search process. She envisions a database on the Web where people can shop for a school the way they shop for a new car—an analogy that incenses academics to no end. (These critics also point out that the Department of Education already maintains such a website, though it is far from user-friendly.)
Acting now. Some schools are already taking the hint. The University of North Carolina recently announced that it was considering requiring the Collegiate Learning Assessment. Kentucky and Wisconsin require their state schools to demonstrate learning outcomes. In Texas, in addition to the testing it already mandates, Gov. Rick Perry has proposed a college exit exam. The Arizona State University system has moved to give individual deans more power to require learning assessments. And businesses are lining up to provide the tools to do it. “Employers, governments, and parents want to know what they are paying for,” says Catherine Burdt of the educational research firm Eduventures. As the college-going population includes more part-time and older students, studies show, the demand for measuring learning outcomes will only increase.
In a few weeks, colleges will hear how Spellings intends to move forward. Colleges, meanwhile, continue to search for that elusive value-added measure, which, however flawed, can lead to better teaching.
“We should not be afraid of a culture of self-scrutiny on campus, but only the faculty can create a culture of learning,” says Bok, who is wary of a federally imposed solution. “Those who say it’s impossible to quantify a college education are not being honest or they are dissembling. All the things you learn can’t be counted, but some can. We need to get more schools interested in examining their own successes and shortcomings.”
That might be something Spellings could support—provided that the colleges publish the results.