12 Comments
Carla Shaw

This highlights how complex the reading debate still is.

It’s easy for discussions to drift into a false choice between strategies and knowledge, when in reality comprehension depends on both. Strategies can help students engage with a text, but knowledge is what allows those strategies to work meaningfully in the first place.

The point about time is particularly important. Knowledge-building is cumulative by nature, so it’s unsurprising that short studies struggle to capture its long-term impact. If anything, that suggests we should be cautious about drawing big conclusions from short interventions.

Ultimately, reading comprehension probably isn’t best understood through isolated strategies or short-term trials, but through coherent curriculum, sustained knowledge development, and opportunities to think, talk and write about meaningful content over time.

Harriett Janetos

"Still, I and others have argued that it’s not a question of choosing between strategy instruction and knowledge. Rather, it’s about putting a particular text or topic in the foreground and bringing in whatever strategies—or skills or literacy standards—are appropriate to enable students to make meaning from it."

Unfortunately, this important message has often been lost in translation. Thank you for linking to my post, Fahrenheit 451: The Temperature at Which Discussions about Reading Comprehension Catch Fire (https://harriettjanetos.substack.com/p/fahrenheit-451-the-temperature-at?r=5spuf). I think it's important to include my attempt to contextualize my conclusions:

"On Decision-Making: Instructional constraint guides lesson planning through an understanding of the barriers to comprehension related to cognitive load, decoding deficits, and a lack of familiarity with sentence, paragraph, and text structure. A reminder from Blake Harvard (Do I Have Your Attention?): Without knowledge of human cognitive processes, instructional design is blind. (Part 1)

On Action Plans: Determining importance and conveying that importance to others in writing gives students a plan of action that they can apply to any text. This tactic for tackling complex text improves language comprehension through a careful examination of the assertions in the text, which in turn facilitates knowledge acquisition. (Part 2)

On Strategic Knowledge/Content Knowledge: Overall, these tactics should reflect high-utility comprehension strategies that facilitate analyzing and responding to text, allowing students to access information and reconstruct knowledge in their own words, strengthening neural pathways through the effort of explanation. (Part 3)

On Content Knowledge/Language Structures: Content knowledge can’t be extracted from text unless students have both sufficient decoding skills and language comprehension skills, and the patterns of language are what’s transportable across content. (Part 4)"

John McCaffery

Kia ora rā Natalie

Thank you for the analysis, which I believe is far less defensive than almost all other international Structured Literacy responses and the actual programme requirements set by national education and/or political entities.

We need to remember that the use of context-detached passages, texts, and resources may be confined only or mainly to the USA.

In Aotearoa New Zealand and Australia, we have always based all integrated reading and writing literacy on content material of high curriculum or high personal relevance, interest, and importance, as all our TESOL and Literacy qualifications show.

Our definition of literacy above very early elementary is based on the language and the specific, often complicated text types and forms that the literacy demands of the curriculum content areas use, i.e. SSci content, Maths content, Science content…

Claims that educators worldwide are teaching and using only or mainly isolated strategies devoid of essential curriculum content are just not accurate or helpful. Assessment is then based on that curriculum or experienced content.

In reality, it is standardised testing that always uses detached curriculum content that students have often never encountered, a major problem, as you point out.

I hope this response provokes the many National Curriculum and embedded TESOL programme educators to also respond.

Ngā mihi John McCaffery Teacher Education

UOA NZ

Sue Livingston

It's just so unfortunate that standardized tests of reading are still being used. As we know, they are unreliable and as you point out, in one study they were found to disagree half the time. As I see it, the problem lies in our conception of what reading is. Reading is not a content area and it should not be tested as if it is. It is a language process that allows us to gain meaning from written content. If we want to know how well that language process is coming along, test it using accommodating content and assess it according to what students show they have learned from reading it (yes, through writing intelligently about it). Isn't wanting to learn something the reason we read anyway?

Natalie Wexler

I generally agree, but I do think there's a place for standardized reading tests, as long as we recognize that they are primarily tests of general academic knowledge (and familiarity with complex syntax), not tests of reading comprehension in the abstract.

So it makes sense to use them as one factor in college admission, because the things they test are related to success in college. They can also give us a general idea of how some groups of students are doing compared to other groups, providing a corrective to grading scales that are often quite subjective.

But standardized comprehension tests should NOT be used as a guide to instruction or as a way of measuring incremental progress for individual students. And unfortunately that is what they're often used for.

Olivia Mullins

Good read. I think comprehension of informational texts is mostly dependent on academic knowledge, but perhaps not for narrative text. Language/syntax understanding also matters a lot, as you mention. Working memory certainly also plays a role. Even something like interest probably factors in. The baseball study is the only study that showed a role "only" for academic knowledge with content-dependent text; all the other correlation studies that I've found show a role for ability as well.

Natalie Wexler

First, thank you, Olivia, for your own detailed critique of Hansford et al's meta-analysis on the Curriculum Insight Project Substack, which pointed up some serious issues that I was not equipped to highlight. (https://curriculuminsightproject.substack.com/p/dont-reconsider-the-relationship)

Generally speaking, I agree with you that the role of academic knowledge is greater for informational or nonfiction text, but it can also play a key role in understanding narrative or, more accurately, fiction (and of course, not ALL narrative is fiction--some nonfiction narratives assume a lot of background knowledge). That's especially true for fiction set in unfamiliar eras or locales. If, for example, you're reading "Number the Stars" and don't know what rationing was during WWII, that could certainly affect your comprehension.

On the baseball study: I wouldn't call the knowledge at issue there "academic." It was of course domain-specific, but I think one reason the baseball study gets so much attention is that the knowledge involved was NOT academic. Baseball knowledge isn't correlated with general comprehension in the same way that, e.g., knowledge of history or science would be. That's why the results are so striking--the "poor" readers who were baseball experts could outshine the "good" readers who were not.

I'm not sure what you mean when you say the other correlation studies "show a role for ability, as well," but I suspect that by "ability" you mean familiarity with complex syntax and the kind of vocabulary found in written but not oral language. It's certainly true that those factors play a large role, especially with complex text. If, in the baseball study, the "poor readers" who were also baseball experts had been given a passage to read from (let's imagine) a PhD dissertation on baseball, I doubt the study would have found the same results.

Olivia Mullins

Thank you, and I agree with these points! I was, somewhat badly, trying to argue that reading tests do measure quite a bit more than academic knowledge. Of course, I obviously think knowledge/academic knowledge is massively important. But as you say, language and syntax are so important. I think working memory probably plays a role, but we can't change that with instruction. I do think the ability to self-monitor matters - it's clearly part of the debate how well that can be taught, but I notice it in myself constantly. For example, I've learned I need to print a paper and take notes directly on it or it's too hard to keep the information straight.

To your point about fiction, sometimes I've been using "content-rich" and "content-light" because it is hard to address all the caveats. I've picked through my kids' books to see how much content is in their fiction, and it's interesting. In one Spy Kids (or similar) book, I immediately opened to a page of information that I actively teach in our program. That series is full of content. But I flipped through some number of pages of a Harry Potter book and found nothing outside of vocabulary that you'd learn in science or social studies.

For what it's worth - I count vocab as a knowledge sub-type.

Olivia Mullins

I wrote about this a few months ago if it's of interest, including more details about the correlational studies: https://omullins.substack.com/p/which-is-more-important-for-reading

Harriett Janetos

Here is ChatGPT's take on Cohen vs. Kraft:

"Instead of choosing one framework categorically, you can ask:

What is the empirical distribution of effect sizes in comparable education interventions?

What is the counterfactual (business-as-usual baseline)?

What is the cost and scalability of the intervention?

What is the cumulative impact over time?

In other words: Effect size magnitude alone is insufficient. Context, feasibility, and comparison class matter."

Natalie Wexler

Yes, Kraft does argue for considering things like cost and scalability. But if you take a look at his paper (which I link to in the post), you'll find that he also argues for a different scale for determining the significance of effect sizes. Maybe ChatGPT missed that point.

Adelaide Dupont

And ChatGPT missed that point because the sources missed that point...