Dramatic New Evidence That Building Knowledge Can Boost Comprehension and Close Gaps
A long-term study found that a content-rich curriculum closed the test-score gap between low- and high-income students.
Building students’ general knowledge can lead to dramatic long-term improvements in reading comprehension, a new study suggests—casting serious doubt on standard teaching approaches.
A rigorous study involving more than 2,000 students has found that children who got a content-rich, knowledge-building curriculum for at least four years, beginning in kindergarten, significantly outperformed their peers on standardized reading comprehension tests. Students from low-income families made such dramatic gains that their performance on state tests equaled that of children from higher-income families.
To understand the significance of these findings, it’s important to have some background information. For at least the past 25 years, reading scores in the U.S. have been largely stagnant, with about two-thirds of students scoring below proficient on national tests. Gaps between students at the upper and lower ends of the socioeconomic spectrum have remained wide and by some estimates have grown significantly, despite massive efforts to narrow them.
In response to low scores, schools have intensified reading instruction. One aspect of reading is decoding, the ability to decipher individual words, and over the past few years there’s been a push to bring that aspect of instruction in line with scientific evidence showing the need for systematic phonics instruction.
Reading Comprehension Instruction Focuses on Skills
But most of the time spent on reading is devoted to reading comprehension, which is what state and national tests purport to measure. The standard approach is to focus on comprehension skills and strategies, like “finding the main idea” of a text and “making inferences.” Often there is a skill of the week, which the teacher demonstrates using a book chosen not for its topic but for how well it lends itself to demonstrating the skill.
Then students practice the skill on other books—fiction, or nonfiction on random topics—that have been determined to match their individual reading levels. The goal is not for children to acquire any substantive knowledge but rather to master skills that will theoretically enable them to understand the complex texts they’ll be expected to read in the future.
Elementary schools devote a reported average of two hours a day to English language arts, or reading, with much less time allocated to social studies, science, and the arts. The assumption is that students don’t need to acquire much substantive knowledge until they reach higher grade levels.
This approach is so deeply entrenched that it has persisted despite its failure to produce gains in reading test scores. In the face of stubbornly low scores, the prescription has often been to double down on it.
Nevertheless, over the last several years, an increasing number of schools have shifted to elementary literacy curricula that systematically build children’s knowledge and vocabulary while also providing the kind of phonics instruction backed by science. But the trend towards knowledge-building hasn’t gained as much traction as the movement for systematic phonics.
One reason may be that we haven’t had strong experimental evidence for knowledge-building. We do have lots of evidence showing that readers who have relevant knowledge—either of the topic they’re reading about, or of general academic vocabulary—have better comprehension. That evidence supports the idea that students from higher-income families generally do better on reading tests because they’re better able to pick up academic knowledge outside of school. But it’s been harder to demonstrate that building knowledge leads to better comprehension.
The reason, researchers and others have suggested, is that it can take a long time for the results of knowledge-building to show up on the standardized tests used to measure reading comprehension. The passages on those tests are on random topics, and it can take years for kids to acquire the critical mass of vocabulary that will enable them to understand texts on topics they haven’t actually learned about. In the meantime, the tests may be failing to measure the valuable knowledge students are in the process of acquiring.
The Colorado Study
That brings us to the long-awaited multi-year study released last week, conducted by researchers at the University of Virginia. The experiment took advantage of the fact that Colorado has long had an unusual number of elementary schools that use a knowledge-building curriculum. Researchers focused on nine such schools in the state that have more applicants than seats, requiring them to conduct lotteries for kindergarten admission. That allowed researchers to compare a “treatment group”—children who got in through the lotteries—with a “control group” consisting of children who applied but didn’t get in.
The 688 children admitted through the lottery got a curriculum based on the Core Knowledge Sequence, which is similar to the Core Knowledge Language Arts (CKLA) curriculum. (When the study began, in 2009, CKLA had not yet been developed.) Rather than putting comprehension skills in the foreground, the Sequence—like CKLA—immerses children in rich content in history, geography, science, and other subjects, largely through having teachers read texts aloud and lead class discussions. (The study was done independently of the Core Knowledge Foundation, which developed both CKLA and the Sequence. It was financed through a mix of public and private funding.)
The various knowledge-building curricula developed in recent years all cover different topics in different ways, but they all share Core Knowledge’s focus on content. Instead of jumping rapidly from one topic to another, students spend several weeks learning about each topic. They also read and write about the content covered in the core curriculum rather than random, unconnected topics. Previous studies have measured the results of some of these curricula, including CKLA, after one or two years and found a positive but modest effect.
Researchers conducting the Colorado study waited four years, until the children reached third grade—the first year state standardized tests are given—before measuring the results. They continued to look at test scores for those students, and their peers who failed to win the lottery, through sixth grade.
They found that the treatment group as a whole made statistically significant gains on reading tests in each grade compared to the control group, gains the researchers characterize as “moderate.” That term can be misleading: it refers to conventional categories of “effect sizes” in statistics. In fact, the gains were large enough that, if extended to American students as a whole, they would move the U.S. into the top five countries on an international reading test given to fourth-graders. Currently, the U.S. ranks 15th out of the 58 countries that participate.
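For readers who want the statistical detail, effect sizes of this kind are usually reported as standardized mean differences. The sketch below assumes the conventional Cohen’s d formulation; the study’s exact statistical model may differ.

```latex
% Effect size as a standardized mean difference (Cohen's d);
% a conventional formulation, not necessarily the study's exact model.
% \bar{x}_T, \bar{x}_C : mean test scores of the treatment group
%                        (lottery winners) and control group (non-winners)
% s_{pooled}           : pooled standard deviation of the two groups
d = \frac{\bar{x}_T - \bar{x}_C}{s_{\mathrm{pooled}}},
\qquad
s_{\mathrm{pooled}} = \sqrt{\frac{(n_T - 1)\, s_T^2 + (n_C - 1)\, s_C^2}{n_T + n_C - 2}}
```

On this scale, an effect size of 0.5 means the average student in the treatment group scored half a standard deviation above the average control-group student, placing that student at roughly the 69th percentile of the control group’s distribution.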
Gap-Closing Gains for Students From Low-Income Families
Breaking down the study’s results by income level leads to additional insights. Eight of the nine schools were located in middle- to high-income areas. Even though those children were presumably acquiring a fair amount of academic knowledge at home, they still benefited from acquiring knowledge at school. The effect size, which measures the difference between their performance and that of their respective control groups, was 0.445.
What does that mean? Again, the researchers characterize the effect as “moderate,” but it needs to be viewed in the context of effect sizes found for other interventions aimed at boosting reading comprehension. One meta-analysis of 82 studies of interventions for struggling readers, none of which involved systematically expanding knowledge, found an average effect size of 0.35. A federal government report that analyzed 24 studies of comprehension strategy instruction found an average effect size of just 0.10. That was considered sufficient for a panel of experts to recommend adopting the strategies studied, which included things like teaching students to generate questions about what they were reading and to make inferences.
In the Colorado study, the effect size for students at the one school in a low-income area was truly extraordinary: 1.299. They also got large boosts in math scores and on the state science test given to fifth-graders. In fact, their gains were so large that, according to the researchers, they eliminated the gaps on Colorado tests between students from low- and high-income families, in all three subjects.
You might have some questions about the study—for example:
How reliable are these data?
The study has not yet been peer-reviewed or published, but the data have been subjected to rigorous evaluation. According to one of the researchers who worked on the study, Daniel Willingham, the lead author—David Grissmer—is extremely careful in analyzing data.
How definitive is this study?
One study can never be definitive, although the researchers tried to eliminate potential sources of “bias” that might limit the applicability of the findings.
For example, all the schools in the study were charter schools, raising the possibility that the findings wouldn’t apply equally to other types of schools. But the researchers discount that possibility, observing that—outside of urban settings—charter schools don’t have a better track record than traditional public schools. All schools in the study were in suburban areas.
On the other hand, the fact that the one school in the study that served a low-income population was located in a suburb might be a limiting factor. The same results might not be found with a school in a low-income urban neighborhood.
As the researchers argue, the results of this study should prompt more research along the same lines: long-term studies, lasting at least four years, of the effects of knowledge-building at the elementary level. Long-term studies are expensive, which is one reason they’re rare, but they may be the only way to get reliable evidence of what actually works to boost reading comprehension.
Does this study mean that if we don’t start building kids’ knowledge in kindergarten, there’s nothing we can do for them?
This study doesn’t address what can be done for students at higher grade levels. It’s not impossible to build knowledge later on, but it is more difficult. By the time students reach middle school or, especially, high school, the curriculum assumes a lot of academic knowledge they may not have. Those gaps in background knowledge can make it difficult or impossible for them to understand the content they’re expected to learn. One thing that can help is to explicitly teach students to write about what they’re learning.
Does this mean we should just ignore the many studies showing positive effects from comprehension strategy instruction?
No, but we need to recognize a few things about them. One is that few if any of those studies have followed students long-term—most last only a few weeks—so we don’t know how long their effects continue. And those studies can’t be used to justify teaching comprehension skills in isolation year after year, which is the standard practice.
It’s also important to note that many commonly taught “skills and strategies” don’t actually have strong evidence behind them. And the evidence for strategy instruction is actually strongest when multiple strategies are taught simultaneously—especially if those strategies are appropriate to the particular text at hand.
Any effective knowledge-building curriculum will bring in strategies in that way, even if they’re not labeled as strategies. Students might be asked, for example, to predict what will happen next in a story or an account of a historical event, or they might be prompted to connect new information in a text to knowledge they’ve already acquired. Those are valuable teaching techniques, but they seem to work best when used to help students think analytically about specific content rather than being taught as free-floating skills.
The Colorado Study Should Lead to Changes in Practice
Even though this is just one study, it should be enough—when combined with the strong evidence that relevant knowledge is a key factor in comprehension—to spark a re-evaluation of the standard approach to reading comprehension.
It should also lead us to rethink how we measure academic progress. This study suggests—as have others—that relying solely on reading and math tests is a terrible idea.
One problem is that they fail to measure the knowledge students acquire through a knowledge-building curriculum. A bigger problem is that the importance attached to the tests is preventing most children from getting that kind of curriculum in the first place.
Reading tests provide a powerful incentive for educators to focus on the comprehension skills the tests purport to measure rather than the social studies and science topics kids need in order to understand the test passages—not to mention the complex texts they’ll be expected to tackle at higher grade levels and in life.
Schools within a state generally have the freedom to choose from a variety of curricula, making it impossible for states to develop tests grounded in the knowledge covered in any particular curriculum. But all states have social studies and science standards that specify content to be taught at each grade level. States could at least connect the passages on their reading tests to that content.
We do need more long-term studies like the Colorado one. But given all the evidence we have, both of the potential benefits of knowledge-building curricula and of the clear failings of the current approach, we can’t afford to wait for more data before taking action. We’ve already done enough damage to children’s future prospects, albeit with the best of intentions, and we shouldn’t keep millions more students from reaching their full potential.
This post originally appeared on Forbes.com.
Comments

Sue Livingston
Finally, a study that debunks the idea that reading is a skill and that teaching reading strategies is enough to improve it. This is a much-needed paradigm shift, one that views knowledge acquisition, discussion, and writing as key players in reading development and improvement. Your description of the study and its context was clearly written and thorough. I will share it widely.
"For example, all the schools in the study were charter schools, raising the possibility that the findings wouldn’t apply equally to other types of schools. But the researchers discount that possibility, observing that—outside of urban settings—charter schools don’t have a better track record than traditional public schools. All schools in the study were in suburban areas."
This is an impressive study, and I look forward to hearing more about it. In the past, I've read articles that contextualize charter school performance by showing the generally greater amount of parent involvement. Do you think this could also be a factor in the student success documented here that may not fully translate to the public school setting?