As a public high school English teacher in Chicago years ago, Sarah Levine often felt ambivalent about the way she taught literature in the classroom.
She wanted her students to develop a lifelong love of reading – to feel moved by what they read, to explore fundamental questions about the human experience. But she found herself focusing more on the “clinical” or technical aspects of literature, like analyzing the effects of a particular motif or how a character is developed – skills that were likely to be tested on statewide exams at the end of the year.
“There’s room for that, but I’d go back and forth about how much time I should be spending on it,” said Levine, now an assistant professor of education at Stanford who researches the way high school students learn literary interpretation and writing, particularly in high-poverty urban areas.
Knowing that standardized tests drive what happens in the classroom, she wondered: What have these exams said over time about what we value in teaching literature? How have they reflected trends in teaching and culture more generally? And ultimately, if the tests position technical skills above emotional engagement or personal reflection, what does that say about the kind of readers we want students to become?
“Do we want them to read like literary critics,” she wanted to know, “or more like regular people who enjoy literature?”
A close look at test questions
Levine recently set out to analyze standardized tests of literary reading from the past 100 years, aiming to see what test-makers and teachers have historically determined is worth learning in high school English classes. She’ll discuss her research at a talk on May 22, “A Hundred Years of Tone and Mood: How Standardized Tests Have Asked Students to Read Literature,” part of a series organized by the Stanford Center for Opportunity Policy in Education (SCOPE).
Levine chose to focus on one particular exam over time for consistency: the New York State Regents exam, which she found offered the fullest and most accessible archive. She obtained at least three Regents exams for each decade over the past century. The earliest test in her sample was from 1904.
With help from several undergraduate research assistants, she reviewed each exam to identify patterns in what the tests asked students to do. Together they coded the test questions into different categories, such as figuration (interpreting the literal meaning of a symbolic passage), word meaning (defining a term in context) and literary device (naming or giving examples of a technique like personification or flashback).
Overall, she found, the tests’ approach hasn’t changed very much over the years: predominantly multiple choice, with questions geared toward assessing skills like vocabulary, recall and comprehension. But some questions she encountered illustrated less predictable ways test-makers have thought to ask about literature.
One essay question on a 1925 test, for instance, encouraged students to reflect on their own relationship to reading:
Q: Show that either fiction or drama, or both, may help us to understand and sympathize with people we should otherwise pass by without interest. Illustrate by instances in your own experience.
Another question, on a 1913 test, asked students to discuss why a poem might be more beautiful than a paragraph of prose expressing the same idea.
“I think these questions are useful because they value the text, the reader and the interpretive experience,” she said. “We’re responsible to the text, but we’re bringing our own experience.”
Multiple-choice questions with one “right” answer might assess comprehension skills, she said, but obviously won’t encourage students to develop their own understanding or consider how they’ve been affected by something they’ve read.
"You can’t ask complex interpretive questions in multiple choice format,” she said, adding that at best, these questions only ask students to get inside the head of a test-maker to make what’s likely to be a fairly conservative or conventional reading of the text.
“But if we don’t test it, do we teach it?” she asked. “And there’s our problem.”
Literary critics or lifelong readers
Levine hopes her research will encourage teachers to pursue questions and activities that engage students more deeply in the experience of reading literature. She also hopes it will help persuade test-makers to move away from multiple-choice questions about literary interpretation. “Is it time to say they just won’t show us much of value?” she asked.
It all comes back to Levine’s larger question: whether high school English classes should be teaching students to become literary critics or lifelong, passionate readers. “In history classes, we want to teach kids to think like historians,” she said. “In science, we want them to think like scientists. In English, what do we want?”