It’s about that time of year: kids are heading back to school. Some will fire up iPads to follow presentations; others will scour the Internet to research essays, perhaps even share and discuss work over school-only social networks. And in the spring, some will take college entrance exams… picking up No. 2 pencils and carefully filling in the bubbles on Scantron sheets.
Which of the examples doesn’t fit in with the others?
It sounds like a test question, but it’s actually a concrete reality. As our classrooms become immersed in technology, the exams designed to measure what we learn in school are still remarkably similar to the ones our parents took when they were young.
Shouldn’t tests change with the times, too? After all, driving exams have evolved to reflect the increasing sophistication of cars — the road test for a Model T was very different from today’s increasingly computerized sedan. But the same isn’t true for college admissions tests. If they don’t adapt to the next generation’s “digital” abilities in an increasingly connected world, what are the alternatives?
Admission Tests, Old-School Style
College aptitude tests have a long, surprising history, one that coincides with the development of the IQ test by French psychologist Alfred Binet in the early 1900s. During World War I, the Army turned to the burgeoning field of intelligence testing to identify officer candidates among nearly two million recruits. According to PBS, that wide-scale use of statistical evidence, aimed at identifying the most intelligent members of society, laid the groundwork for entrance exams in higher education.
In 1926, the College Board built on that idea and administered its own version of the Army’s test, which became the SAT. A little more than a decade later, a group of elite northeastern colleges agreed to use it as a common admission metric, cementing its place among teenage rites of passage.
In 1959, the American College Testing program, or ACT, emerged as a way to highlight the top academic talent in Iowa’s high schools. Designed to assess a student’s preparation for college, it measures a candidate’s knowledge of the standard high-school curriculum rather than raw intelligence.
Both exams have become critical elements of the college admissions process, with the shared goal of helping universities identify the most college-ready students from a growing pool of candidates.
The SAT, which has remained largely unchanged over the years, comprises separate tests in mathematics, critical reading and writing, each scored on a scale of 200 to 800. The ACT, meanwhile, uses four subject tests: English, mathematics, reading and science, with scores ranging from 1 to 36.
Both tests are nationally recognized and play a role in determining not just who is accepted into universities, but also whether scholarships are awarded and, for athletes, whether they qualify to play collegiate sports.
Yet students still use pencils to fill out antiquated Scantron sheets, oddly out of step with the digital transformation happening in classrooms across the country. Educators today often use social media to poll students on the causes of the Civil War, for example, or mobile technology to dissect a 3-D frog together in real time.
Admissions Get a Digital Overhaul
According to the New York Times, the ACT plans to produce something beyond a college admissions test, forgoing fill-in-the-bubble sheets for “more creative, hands-on questions.” The goal is to begin testing as early as the third grade, helping students, especially from low-income families, see the possibilities and importance of preparing for college.
According to USA Today, the ACT will be available digitally starting in 2015, allowing students to view results within minutes, and not weeks, after clicking “submit” on a PC, iPad or other digital device. The 215-question fill-in-the-bubble paper tests will still be available for those who prefer the traditional option, but offering both formats is part of a transition plan to make the ACT accessible while keeping students comfortable — so the test measures learning, not the ability to navigate a computer-driven exam.
The ACT will still gauge a student’s knowledge of classroom subjects, but the digital exam will pose more visual problems, according to the New York Times, such as free-response questions in which conclusions are formed by manipulating on-screen images. You may see a graphing question in which you change the pressure and temperature of a gas, for example, or experiment with computer-generated beakers and liquids to solve chemistry equations.
Several graduate admission exams, like the GMAT and the GRE, are going the electronic route as well, Brent Evans, an assistant professor who conducts quantitative research on higher education policy at Vanderbilt University, told me in an interview. “There are several advantages to that kind of exam, such as getting a more precise measure of ability at the low and high ends of the performance distribution, and allowing for a more precise measure of ability with fewer questions and, therefore, a shorter exam.”
But one downside to digital exams, Evans explains, is the inability to skip a difficult question or return to it later. Each one must be answered as it is displayed, making the choice of how long to spend on a problem more important than on paper exams.
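The adaptive approach Evans alludes to can be sketched in a few lines of code: the exam picks each next question near its running estimate of the test-taker’s ability and nudges that estimate up or down with every answer. Everything below is an illustration only; the item bank, the step-shrinking update rule, and the `run_adaptive_test` helper are invented for this sketch, not how the ACT, GRE or GMAT actually score.

```python
# Minimal sketch of a computerized adaptive test: each answered item
# refines the ability estimate, so fewer questions are needed than on
# a fixed paper form. Difficulties and update rule are illustrative.

def run_adaptive_test(answers_correct, difficulties=None):
    """Estimate ability by always asking the item nearest the estimate.

    answers_correct: function(difficulty) -> bool, simulating the test-taker.
    Returns the final ability estimate once the item bank is exhausted.
    """
    if difficulties is None:
        # A small bank of items spanning easy (1) to hard (9).
        difficulties = list(range(1, 10))
    remaining = sorted(difficulties)
    estimate = sum(remaining) / len(remaining)  # start at the bank's midpoint
    step = 2.0
    while remaining:
        # Choose the unanswered item closest to the current estimate...
        item = min(remaining, key=lambda d: abs(d - estimate))
        remaining.remove(item)
        # ...and move the estimate up or down based on the response.
        if answers_correct(item):
            estimate += step
        else:
            estimate -= step
        step = max(step * 0.7, 0.25)  # shrink steps as confidence grows
    return estimate

# A simulated test-taker who can handle items up to difficulty 6:
final = run_adaptive_test(lambda d: d <= 6)
```

Real adaptive exams also stop early once the estimate stabilizes, which is where the “fewer questions, shorter exam” advantage comes from; this sketch simply exhausts a tiny bank for clarity. It also shows why skipping isn’t allowed: each answer determines which item is shown next.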
The SAT, meanwhile, is taking a more cautious approach to modernizing. Late last year, David Coleman, the newly appointed president of the College Board, which administers the SAT, suggested the writing portion of the exam may need improvement. “If you look at the way the SAT assessment is designed, when you write an essay — even if it’s an opinion piece — there’s no source information given to you,” he said in a keynote speech at the Brookings Institution. He added that while students can write personal essays to show their creativity, such writing doesn’t meet the demands of companies and colleges, since it’s neither precise nor analytical and doesn’t draw upon evidence.
“I think there is good reason to think about a design of SAT where, rather than kids just writing an essay,” he said, “there’s source material that they’re analyzing.”
Coleman also took aim at the arcane vocabulary section of the SAT, famous for words most will never use outside of the test itself. A willingness to alter it may pave the way to measuring other, more applicable skills.
“Why wouldn’t you have a body of language on the SAT that’s the words you most need to know and be ready to use again and again?” he said in the keynote, suggesting that obtuse words be replaced with more practical ones like “transform,” “deliberate” and “hypothesis.”
Coleman’s ideas to re-tool the SAT won’t happen overnight, since College Board members need to approve any redesign, but the envisioned and controversial shake-up comes at a time when digital advances and standardized testing opponents are pushing for greater change to the entire admission process.
Why Use These Tests At All?
Robert Schaeffer, public education director of the National Center for Fair & Open Testing, isn’t impressed by the proposed redesigns. He isn’t a fan of admission tests in general.
“Electronic exams have some advantages over their pencil-and-paper equivalents, particularly the capacity to immediately calculate and report test-takers’ scores,” Schaeffer told me over e-mail. “But simply adapting a low-quality exam for computerized delivery does not magically transform it into a better test.”
He points out that a digital overhaul of an already flawed test may be a costly mistake as well. “Large-scale electronic test administration requires a substantial investment in additional equipment, bandwidth and other infrastructure at a time when education budgets are severely constrained,” he said, adding that an inexpensive, more accurate measure is already available. “Independent researchers have consistently found that high school curriculum strength and classroom grades — both reflecting a student’s academic accomplishment over an extended time period — are stronger predictors of college performance than test scores based on filling in bubbles.”
Schaeffer maintains that national testing — both electronic and old-fashioned — often disenfranchises those on the lower-end of the socio-economic scale. But he believes admissions offices are starting to see the light. “It is unquestionably true that more colleges and universities are eliminating requirements for applicants to submit ACT or SAT exam scores,” he said.
He doesn’t see digital learning as a primary cause of the accelerating “test-optional movement,” but if he is right and the SAT and ACT fail to adapt, the shift may create opportunities for students to showcase their learning in other ways, such as through “massive open online courses,” dubbed MOOCs, a nascent but growing option.
What Else Is Out There?
Open to anyone with an Internet connection, MOOCs are usually, but not always, free and draw thousands of students, especially from outside the U.S. Classes involve self-paced learning and include discussion boards and assessments. Grades are determined by an instructor, peers or software.
MOOCs often serve as supplemental study material for university students, but they’re also playing a role in college admissions. It is unclear how persuasive MOOCs are to admissions officers, but completing the courses and listing them on an application can demonstrate considerable intellectual interest on a student’s part.
“Anecdotally, many high achieving high school students are taking MOOCs from prestigious universities and reporting that on their college admission applications,” Evans said. “Imagine a student applying to Penn having already taken EdX courses from Harvard and MIT and Coursera classes from Penn and Stanford.”
College admissions officers may view MOOCs much the same way they consider Advanced Placement and SAT II exams and International Baccalaureate programs, which test students on material related to specific subjects studied in high school. Colleges often grant credit for high scores on AP and IB tests taken in high school, and those scores can help a student stand out among the throngs of candidates. Universities, however, make admission decisions by the end of March, often long before such scores become available at the end of the senior year.
“Though predicting the future is always risky, ‘blended’ education, in which young people enroll in a mix of in-person and online courses — both in high school and college — seems a likely direction,” Schaeffer said. “That means higher education leaders will need to develop new systems for evaluating applications, awarding credit, and determining degree requirements.” ♦