Is It Time to Topple the "Five Pillars" of Literacy?
A decades-old report on reading instruction has led to serious misunderstandings about comprehension.
A simplified version of a decades-old report on reading instruction is routinely used to define the “science of reading.” For reading comprehension, that’s a misinterpretation that has condemned millions of children to functional illiteracy.
’Tis the season to look back on major events of 2022. But I’m going to look all the way back to 2000, when the report in question was released, because it’s had a huge effect on efforts to address the literacy crisis this year. And my wish for 2023—although, alas, not my expectation—is that we’ll finally get away from it.
I’m talking about the National Reading Panel (NRP) report, and specifically about the now ubiquitous infographic that someone derived from it: the “five pillars of early literacy,” defined as phonemic awareness, phonics, fluency, vocabulary, and comprehension.
When the NRP report came out, many educators objected to its endorsement of systematic instruction in phonemic awareness (the ability to say and hear individual sounds in words) and phonics (the ability to connect those sounds to the letters that represent them). They claimed the panel was biased and driven by ideology, and some still raise those arguments. To be clear, that’s not my complaint. I’m convinced that the panel’s findings on those aspects of literacy rest on solid ground.
Few, however, have objected to the NRP’s incomplete and misleading findings on comprehension. Maybe that’s because it was endorsing an idea that many educators already embraced: teaching “strategies” designed to enable students to make meaning from text. The panel found evidence, for example, that having students summarize a text helped them understand it.
I’m not challenging that evidence. But it’s important to understand that the question the NRP tried to answer wasn’t “What’s the best way to help students comprehend what they read?” It was much narrower: “Does comprehension strategy instruction improve reading? If so, how is this instruction best provided?” Using rigorous criteria, it reviewed about 200 studies on comprehension strategy instruction and found convincing evidence for eight kinds.
Two problems with the NRP’s comprehension findings
That might sound unobjectionable. But there are two basic problems. One is that a lot of other things are necessary for comprehension besides strategy instruction. The second is that even if you accept the NRP’s narrow focus, most of what goes on in the vast majority of American classrooms has nothing to do with what it endorsed.
Let’s start with problem number two. Here are some ways classroom practice diverges from the report’s findings:
- The studies reviewed by the NRP lasted just a few weeks. But American elementary schools spend an average of two hours a day on reading, year after year, and most of that time is devoted to comprehension skills and strategies.
- The NRP did not endorse most of the “skills and strategies” taught in classrooms, like “finding the main idea.”
- The NRP found the most support for “multiple strategy instruction,” geared to what works for a specific text. But American classrooms typically focus on one skill per week, using texts that lend themselves to teaching that skill.
What factors got left out of the 446-page report? The two biggies are background knowledge—either of the topic or of academic knowledge and vocabulary in general—and familiarity with complex syntax. It’s clear from the evidence that if you’re seriously unfamiliar with one or both of those things, you may be able to understand a simple story but you’re going to struggle with more complex text. That’s what happens to most American students when they reach higher grade levels.
The NRP didn’t explicitly say strategy instruction is all that’s needed for comprehension. But that’s what the report has been interpreted to mean. Nor did the panel conclude that strategies should be taught only as part of the “reading block.” In fact, it said there was evidence that strategy instruction worked well in subjects like social studies—just not enough evidence to draw any conclusions.
But that point has fallen by the wayside as reading blocks have ballooned in a futile attempt to raise reading scores. At the same time, schools have marginalized or eliminated social studies, even though evidence suggests that more time on social studies can increase reading scores more than additional time on reading.
“Comprehension” has taken over the elementary schedule
The NRP didn’t invent comprehension strategy instruction, but it helped enshrine it as the way to “teach” comprehension. While many education school professors have resisted the NRP’s findings on phonics, they’ve embraced comprehension “skills and strategies.” The percentage of teacher preparation programs that included “comprehension” in their curricula zoomed from just 15% in 2006 to 75% by 2016.
What really gave comprehension instruction a boost, though, was a program that was part of the federal No Child Left Behind legislation called Reading First, in the early 2000s. It basically said to states and school districts: if you want federal funds, you need to adopt reading programs that cover the five components of early literacy—as defined by the NRP report. That led to more systematic phonics instruction, but only for a while. The lasting legacy of Reading First is a significant increase in the time schools spend on reading—and particularly on comprehension “skills and strategies.”
Now, many well-meaning educators and officials who want to revamp reading instruction to align with science mistakenly believe that if they just add “more of the phonics,” as one school board official put it, they can leave the rest of the current program—comprehension skills instruction—in place. They want to avoid “a situation where you could throw the baby out with the bathwater,” the official added. But that baby is a big part of the problem.
Was the NRP necessary?
One question is why we needed the NRP at all. The federal government had already appointed an equally prestigious panel on reading instruction—under the auspices of the National Research Council (NRC) rather than the National Institutes of Health. Two years before the NRP report was published, it produced a report that was just as voluminous.
The earlier panel came to basically the same conclusions as the NRP on teaching phonemic awareness and phonics. And on comprehension, its authors similarly endorsed certain kinds of strategy instruction.
But they also discussed the role of knowledge in comprehension and what teachers could do to foster it. “Beginning in the earliest grades,” the report said, “instruction should promote comprehension by actively building linguistic and conceptual knowledge in a rich variety of domains, as well as through direct instruction about comprehension strategies.” (Emphasis added.) And it noted that one successful approach to strategy instruction used “text that is thematically related so that children have the opportunity to build their knowledge of a topic or area over time.” These points are important, and—as far as I can tell—neither appears in the NRP report.
According to the NRP itself, a second panel was needed because the NRC committee didn’t “address ‘how’ critical reading skills are most effectively taught and what instructional methods, materials, and approaches are most beneficial for students of varying abilities.” But at least two supporters of the “science of reading” have pointed out that the NRP didn’t do that either. The late Robert Slavin noted that the NRP described “essential elements of curriculum but not of instruction.” Mark Seidenberg has gone even further, arguing that the NRP’s findings don’t even provide “a sufficient basis for designing an effective reading curriculum.”
If there had never been an NRP, would the NRC report have come to define the “science of reading”? And would it have made a difference to the way schools try to teach comprehension? That’s impossible to know. The NRC report didn’t mention knowledge in its bullet points, so maybe an infographic based on its findings would have led to the same harmful overemphasis on comprehension skills we have now.
Yet another problem with both reports is that they’re over 20 years old. There’s been more significant research since then, particularly on whether elementary literacy curricula that aim to build knowledge can boost reading comprehension. We still need more data, but so far the results are promising. Maybe we need a new report, and a new infographic?
But it’s possible that any infographic would lead to misunderstandings about literacy instruction, given the complexity of the subject. The virtue of infographics is their simplicity, and some things may just be too complicated to be portrayed in one.
Aside from reducing comprehension to strategies, the “five pillars” image treats fundamentally different aspects of literacy as though they were similar—something that’s been pointed out by reading experts Mark Seidenberg and Hugh Catts. It also makes it look like the listed components are entirely separate, when in fact they inevitably interact.
But could we at least stop defining the science of reading using an image or list that is inherently misleading? It’s been suggested, for example, that we need a sixth “pillar,” knowledge. That’s not ideal, but maybe it would help alert educators, the media, and the general public to the fact that the way we teach—or fail to teach—phonics isn’t the only thing seriously wrong with our current approach to literacy instruction.
This post originally appeared on Forbes.com.
I'd make two comments:
- I think the NRP report does in part take vocabulary as a proxy for the type of background knowledge you present (“knowledge” is a slippery term in research, given there are so many different types!). It is quite common for the two to be used interchangeably, though it is fair to say vocabulary may be a narrow conception of background knowledge.
- The key issue is that the NRP could only include high-quality experimental studies on knowledge-building curricula leading to reading comprehension gains, and that is an area that doesn’t have a lot of good empirical evidence. Cabell and Hwang (2020) put it really well: “Well-established theoretical models and a body of empirical research elucidate the critical role of content knowledge in comprehending texts. However, the potential of supporting knowledge in service of enhancing linguistic and reading comprehension has been a relatively neglected topic in the science of reading.”
Of course, any significant research finding from two decades ago should be updated, and our views and insights should evolve. I suspect a systematic review today would throw up similar challenges re: “knowledge building/knowledge rich” curricula and instruction. We still don’t know a great deal about how to operationalise knowledge building in the curriculum so that it consistently improves reading outcomes, though we recognise the vital role of declarative knowledge and vocabulary in comprehension.
It takes a model or, to me, a heuristic to reduce complex bodies of knowledge and/or theories in a way that can be understood, or at least serve as an entry point to a profession—or as an argument or counterargument. I believe the rope and/or the simple view served such a purpose: to reduce some (but not all) of the findings of the NRC and the NRP into something that could (a) counteract the widespread cult, so to speak, of whole language at the time—which indeed morphed into balanced literacy as a result—and (b) serve as an organizer for persons trying to articulate a more scientific perspective. Of course, the positive organizing/communicative features of models/heuristics also have negatives, like oversimplification and deification, which we have all seen as well.

The so-called “strands” are not at all “separate” and are highly correlated. As a measurement person and as an early contributor to DIBELS, I cringed at efforts to develop measures to fit into each strand and thus into a “box” for each. I also cringed at instructional efforts to develop interventions to fit into a box for each strand, and as noted, “comprehension” is not one that fits easily into a box. Notably, neither does “fluency.” Fluency is NOT the same as “automaticity.”

Still, to me, it is not worth SMASHING a heuristic; better to promote a richer one for comprehension. Joe Torgesen hit me with his heuristic long ago, one so RICH in communicating the complexity that it includes factors like Reading Skills, Language Skills, Knowledge, and Meta-Cognition, all with subcomponents like…motivation and interest! Life experience! Fix-up skills! Vocabulary. That heuristic, to me, is the one that should be promoted (or a similar one) when one talks about comprehension, because it identifies the things that can be taught—and the things that maybe can’t be taught. How does a teacher get a 9th grader interested in the Odyssey? Doug Fisher knows some strategies, I’ll bet!
Sorry for the long posting. I do believe past models/heuristics have served, and are serving, their purposes, but we need to know their advantages and disadvantages, not deify them, and be aware of the dangers.