Yes, There Is Evidence that Building Knowledge Boosts Reading Comprehension
It can be hard to measure the effects, but recent studies suggest they're powerful.
Studies show it’s easier to understand what you’re reading if you have knowledge of the topic, but some reading experts say there’s little evidence that building kids’ knowledge boosts reading comprehension. That may be changing.
Among scientists who study the learning process, it’s well established that having prior knowledge of a topic has a strong positive effect on reading comprehension. While there are few studies directly addressing that phenomenon, cognitive psychologist Daniel Willingham has said the effect is so well known that studies on other topics routinely control for it. There’s also evidence that scores on reading comprehension tests are highly correlated with general cultural knowledge.
And yet, if you look at schools in the U.S. and some other countries, you would never know that’s the case. Year after year, students get many hours of reading comprehension instruction that neither takes their prior knowledge into account nor tries to increase it. Students are given periodic tests to determine their reading levels based on passages on topics they may or may not know anything about. They’re then matched with books that have been assigned levels based on easily measurable factors like word length and sentence length.
These efforts to boost comprehension have the trappings of science, but they’re omitting what science has found to be a key factor: knowledge. (And in the case of individual reading levels, there’s apparently no scientific underpinning whatsoever.) The theory is that if students practice comprehension skills and strategies like “making inferences” on a random variety of books that are fairly easy for them to read, they’ll become better readers. But a reader’s ability to make an inference will vary depending on the topic and her knowledge of it, so it’s not a general skill that just gets better with practice, like playing tennis.
There is evidence that teaching certain comprehension strategies can have positive effects in the short term—most studies last only about six weeks. But there’s no reason to believe that doing it year after year boosts comprehension. Considering that only about a third of American students test at the proficient level or above on reading tests—a proportion that hasn’t changed in over twenty years—it seems that the skills-focused approach to comprehension must be leaving out something important. Like, perhaps, building kids’ academic knowledge and vocabulary as early and as much as possible, so that they’re more likely to understand whatever they’re expected to read.
But first: what do we really know about the effect that background knowledge has on children’s reading comprehension? There’s some more or less accidental evidence, like the correlation between more social studies and higher reading scores, or the inadvertent experiment that occurred when France abandoned its content-rich elementary curriculum.
A recent “critical review” of the more formal research, going back to 1950, provides a useful summary—with the caveat that different researchers used different measures of comprehension. The authors of the review based their findings on 23 studies of children ages six to twelve, most done in the 1980s and 1990s, that either used a “knowledge-building intervention” or looked at correlations between prior knowledge and reading performance. Among the findings:
· The more knowledge children had of the topic, the better their reading comprehension—across the board.
· Readers deemed to have “generally poor comprehension skills” were able to compensate for that if they had a lot of relevant background knowledge, especially when they were just asked to recall what they had read as opposed to making inferences about it.
· Readers who lacked knowledge of the topic understood a text better if it was well written. Interestingly, readers with a lot of knowledge who were generally poor readers did better when the text didn’t hang together as well, perhaps because the extra effort required to make sense of it got them more engaged.
· Background knowledge was more important when the texts were expository rather than narrative.
But not all of these studies were what researchers consider the “gold standard”—randomized controlled trials, or RCTs. Such experiments call for dividing subjects into two comparable groups and then giving only one group some kind of “treatment,” with the other group acting as the “control.” Results are typically reported as effect sizes: the difference between the two groups’ average scores, expressed in standard deviations (that is, relative to the natural variation in the scores). That kind of study indicates that the treatment—for instance, building kids’ knowledge—is probably what caused the result.
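For readers curious about what an effect size in standard deviations actually means, here is a minimal sketch of one common version, Cohen’s d (the difference in group means divided by the pooled standard deviation). The function and the sample scores are purely illustrative and are not drawn from any of the studies discussed here.

```python
import statistics

def cohens_d(treatment, control):
    """Effect size: difference between group means, expressed in units
    of the pooled standard deviation of the two groups."""
    n1, n2 = len(treatment), len(control)
    m1, m2 = statistics.mean(treatment), statistics.mean(control)
    # statistics.variance() is the sample variance (divides by n - 1)
    v1, v2 = statistics.variance(treatment), statistics.variance(control)
    pooled_sd = (((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)) ** 0.5
    return (m1 - m2) / pooled_sd

# Hypothetical test scores for two small groups of students
treatment_scores = [72, 78, 81, 85, 90]
control_scores = [68, 74, 77, 80, 84]
print(round(cohens_d(treatment_scores, control_scores), 2))
```

An effect of 0.5 would mean the average treated student scored half a standard deviation above the average control student, which is why the half-standard-deviation result attributed to Grissmer’s study later in this piece counts as large by the standards of education research.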
Another issue is that most of the studies in the review that involved building students’ knowledge did so for only a few hours. That’s a problem, especially when the measure of comprehension is a standardized test that covers topics unrelated to what children have actually learned. It can take years for students to acquire the critical mass of academic knowledge and vocabulary that will equip them to understand passages at “grade level,” regardless of whether they know much about the specific topic.
That’s probably why the few RCTs that have examined the connection between knowledge-building and comprehension have mostly yielded inconclusive or undramatic results. One study done in England and published in 2017 purported to be the “first ever RCT of the Core Knowledge curriculum,” referring to an approach associated with E.D. Hirsch, Jr., a longtime advocate of building children’s knowledge. The researchers found that after one year of using the curriculum with children ages seven to nine, there was no effect on comprehension overall as measured by a standardized test, and only a small positive effect for children from lower-income families—the students most likely to benefit, since they’re the least likely to pick up academic knowledge at home.
But the study didn’t actually involve one of the curricula developed by the Core Knowledge Foundation, and the researchers acknowledged both high levels of attrition and poor implementation by teachers who were hampered by their own lack of background knowledge. One reading researcher, Sonia Cabell, has pointed out that the study doesn’t meet the standards set by a federal government clearinghouse for education research.
Cabell herself is leading an RCT of the Core Knowledge Foundation’s Core Knowledge Language Arts (CKLA) curriculum in early elementary classrooms. CKLA is designed to be implemented during the “literacy” or “language arts” block in elementary school, but instead of putting comprehension skills and strategies in the foreground, it focuses on meaty topics, including some in social studies and science. It’s the oldest of several recently developed literacy curricula that take such an approach.
In an article, Cabell and a co-author discussed four other studies that have examined the effect of knowledge-building curricula or interventions on comprehension in the early grades. All found positive effects on measures of the vocabulary or knowledge that had actually been taught, and two also found the harder-to-achieve effects on standardized comprehension measures.
Cabell’s own study also found both kinds of effects, based on the first year of what is designed to be a multi-year study. After just one semester, she and her fellow researchers found significant gains in the knowledge and vocabulary that kindergartners had been taught, as well as gains on some standardized measures of vocabulary and knowledge.
In an interview on the Science of Reading podcast, Cabell said she hadn’t expected to see results like that until at least first grade. True, the effects weren’t apparent across all of the measures used, and what effects there were appear to be small—.06 and .12 of a standard deviation. But most RCTs of language interventions see effect sizes of only .01, she said. Given that context, she called the results “quite remarkable.” Unfortunately, the pandemic has temporarily derailed Cabell’s study, but eventually it will continue.
Meanwhile, another long-term, large-scale RCT examining curriculum based on the Core Knowledge Sequence may yield even more remarkable results. While it hasn’t been published yet, its findings were described by Hirsch in his most recent book, based on a talk given by the lead researcher, David Grissmer. According to Hirsch, Grissmer found that after three years, students in charter schools that had developed a curriculum based on the Core Knowledge framework exceeded the verbal achievement of a control group by more than half of a standard deviation. Among students from the lowest-income families, the difference was two thirds of a standard deviation.
To be sure, knowledge isn’t the only factor in reading comprehension. Familiarity with the conventions of written language, which is far more complex than most spoken language, is also crucial. And it’s important to ask kids to analyze what they’re learning, including by doing things like making inferences. But let’s hope that studies like Cabell’s and Grissmer’s—and someday, RCTs of other knowledge-building elementary curricula—will provide the evidence needed to put content rather than comprehension “skills” front and center in literacy instruction.
This post originally appeared on Forbes.com.