'What Works' in Reading Comprehension--And What Doesn't
The divide between reading researchers and cognitive scientists has led to a lot of unnecessary complexity--and student failure
Research in education rarely draws on the substantial body of scientific evidence on how learning works. That’s a problem for teachers, students—and the rest of us.
Researchers and academics focus on their own fields because that’s what they know best. But if we don’t break down the silos separating education from cognitive science—the study of how we learn—we’re unlikely to make progress in helping kids succeed academically and acquire the knowledge they need to become responsible, productive members of a democratic society.
A recent case in point is a “practice guide” issued by an arm of the federal government called the What Works Clearinghouse. It focuses on “interventions”—tutoring or supplemental classes—for struggling readers in grades four through nine. There are way too many such readers. Only about a third of students in fourth and eighth grade read at or above the “proficient” level, according to national tests.
The What Works practice guides, issued periodically on various subjects, translate the findings of academic papers into recommendations for teachers. Panels of experts search for studies, dig into those that meet rigorous standards, and come up with advice based on that evidence. All of this seems to make total sense.
But some experts have raised questions about the appropriateness of “meta-analyses” like these, because they take a bunch of studies that differ in significant ways and come up with an average “effect size,” the measure of how well an intervention worked.
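For readers who want the statistic spelled out: an effect size is usually a standardized mean difference, such as Cohen’s d, and a meta-analysis combines the individual studies’ values into a weighted average. Here’s a minimal sketch, assuming the common setup in which each study compares a treatment group to a control group (the exact weighting scheme varies from one meta-analysis to the next):

```latex
% Effect size (Cohen's d) for study i: the difference between the
% treatment and control group means, divided by the pooled standard
% deviation of the two groups.
d_i = \frac{\bar{x}_{T,i} - \bar{x}_{C,i}}{s_{p,i}}

% The meta-analysis then reports a weighted average of these values,
% with weights w_i typically based on sample size or inverse variance.
\bar{d} = \frac{\sum_i w_i \, d_i}{\sum_i w_i}
```

The arithmetic makes the concern concrete: if one intervention yields d = 0.8 and a quite different one yields d = 0.1, an average around 0.45 describes neither intervention well.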
More specifically, when it comes to “reading,” a basic problem is that three different things are lumped under one label: recognizing or sounding out words; reading them with fluency; and understanding what they mean. The practice guide includes recommendations for all of them.
With word-recognition—and to a lesser extent, fluency—it makes sense to limit the analysis to studies in the field of reading. Learning to sound out, or “decode,” words involves a finite set of skills that, when practiced in a systematic way, usually lead to success.
But reading comprehension is different. It’s not just a reading process. It’s inextricably connected to the process of learning in general. And cognitive scientists have found that the key factor in learning new information is how much relevant information you already have.
That’s because the aspect of our consciousness that takes in new information, our “working memory,” is easily overwhelmed. Until we become fluent readers, we have to juggle things in working memory like how to decode unfamiliar words and where to put the emphasis in sentences, in addition to the new information in the text. The more information we have in long-term memory that’s relevant to the text—whether that’s knowledge of the topic or general academic vocabulary, or both—the more capacity we have in working memory to understand and retain new information.
Problems with the panel’s recommendations
But little of that evidence from cognitive science is reflected in the practice guide, or in the research the panel surveyed. There’s a nod to “building word and world knowledge,” but that turns out to mean brief, one-time explanations of unfamiliar terms just before students read a text. That may help kids wrest meaning from the passage at hand in the moment, but unless they hear those words again—repeatedly, over a period of weeks, in different contexts—the knowledge is unlikely to stick and enable them to become better overall readers.
One of the panel’s recommendations is to “routinely use a set of comprehension-building practices to help students make sense of text.” That’s nothing new. It’s what students already get in their regular reading classes, most of which spend hours each day on isolated comprehension “skills and strategies,” to little discernible effect. And especially in the last 20 years, reading classes have taken over so much of the schedule in most elementary schools that they’ve pushed out every other subject except math.
One old chestnut is the “skill” of “finding the main idea,” to be practiced on texts covering a random assortment of topics. The What Works panel initially presents “generating gist statements” as though it were something novel, but elsewhere the guide is more forthright. “Gist is another word for main idea,” it advises teachers to tell students.
The theory has long been that students just need to practice this skill and get better at it. But it’s often either so obvious that students won’t need much practice—just look at the title and circle the words that appear in the text frequently, as the guide recommends—or too difficult to accomplish without a base of relevant prior knowledge.
For example, with a passage on the American Revolution, teachers are advised to define the words conflict, excessive, and local before students start reading. But what if students also don’t know the words taxation, representation, protests, and colonists? What if they’re not sure what the American Revolution was? Due to the sad state of history instruction, many kids are probably in that position.
The guide acknowledges that texts on unfamiliar topics are “more challenging,” but its advice is simply to “continue to ask students to generate gist statements so they can continue to work the skill with harder and harder text.” And one broad recommendation is to give students opportunities to “practice making sense of” challenging text. But the guide also says that if students lack the knowledge to figure out unfamiliar words from context, teachers should switch to texts with more words students know. It’s not clear how to reconcile these apparently conflicting recommendations.
It's not that the practices the guide recommends are ineffective, per se. They’re only ineffective when they’re viewed as ends in themselves—skills that can be acquired in the abstract and applied generally. If they’re used in service of helping students understand the content of a coherent, knowledge-building curriculum—which unfortunately isn’t the kind used in most schools—they can be quite effective, particularly when combined with carefully sequenced writing instruction.
But you won’t find anything about coherent curriculum in the practice guide either. While there are a couple of references to connecting reading passages to the content in students’ regular classes, teachers are also advised to vary the topics. That advice is reinforced by sample passages whose subjects range from Gandhi to King Tut to seabirds to the War on Poverty.
To be fair to reading researchers, cognitive scientists themselves don’t always agree, which can make it hard to know which evidence to rely on. A recent article in the New York Times drew on the work of one cognitive scientist to argue that it’s best to let students “struggle” with a problem before providing direct instruction. Other well-regarded cognitive scientists have come to the opposite conclusion. (A previous What Works guide, issued in 2007 and compiled by a panel that consisted mostly of cognitive psychologists, recommended several practices supported by cognitive science. But it didn’t mention reading comprehension, and none of its recommendations seem to have penetrated the practice guides on that topic.)
A simpler and far more effective way to boost comprehension?
Even if reading comprehension experts want to confine themselves to research in “reading,” they would do well to broaden their focus. It’s harder to measure the effects of a knowledge-building curriculum than those of a brief, targeted intervention, and effect sizes are generally smaller when the “intervention” is an entire curriculum. But some evidence that knowledge-building curricula can significantly boost comprehension is beginning to emerge.
Not to mention a couple of recent, more narrowly focused studies in England that produced eye-popping results. For 12 weeks, teachers spent about 30 minutes a day reading aloud novels that exceeded students’ own reading abilities, leading a class discussion at the end of each chapter. In an initial experiment with 12- and 13-year-olds, poor readers made 16 months of progress in that time, as measured by a standardized reading comprehension test.
When the same thing was tried with the equivalent of first- and third-graders in a school that served many struggling readers, around 90% of all students made more than six months of progress. (The experiment is apparently unpublished but is described in two videos, here and here.) In the third-grade class, 78% made more than two years of progress. When the experiment was extended to all elementary grades in the school, the average improvement was 15 months. (In this second study, children followed along in the text, using their own copies as teachers read the novels aloud.)
Plus, the kids enjoyed this “intervention” far more than they probably would the tedious-sounding comprehension “routines” described in the practice guide. “You can’t measure how much joy the children got,” a teacher involved in the study said in one of the videos. “To be valued, to be told, yeah, you can read this, you can enjoy this.”
Just reading novels won’t provide all the knowledge kids need to understand complex text in areas like history and science. But if we spent only half an hour a day reading novels aloud—instead of the two or three hours many schools spend on reading comprehension skills—we’d have lots more time in the schedule for history and science.
I don’t know if the studies of “just reading” meet the exacting “What Works” standards. But if not, it certainly sounds like reading researchers should try conducting similar studies that do—and they should do that as soon as possible.
This post originally appeared on Forbes.com.