When I talk about effective writing instruction, I often mention the importance of feedback. It’s a crucial component of what cognitive scientists call “deliberate practice,” a concept that can be applied to acquiring any complex skill, like playing the violin or tennis.
First, a teacher breaks the process into manageable chunks and has students practice the chunk they need to work on. Then comes the feedback, which should be prompt and targeted. Once students have mastered one chunk, the teacher guides them to practice the next.
Deliberate practice can make the basics of a process automatic, freeing up cognitive capacity for higher-order thinking. If, for example, you master the fingering of the violin to the point where you don’t have to think about it anymore, you can focus on things like intonation.
The application of deliberate practice to literacy is most straightforward with foundational skills like decoding words or spelling. If students practice those things to automaticity, they have more capacity for comprehension or expression.
The picture is somewhat different when it comes to higher-order writing skills. No amount of practice will make them entirely automatic. Even highly experienced writers need to devote conscious effort to choosing words, constructing sentences, and organizing paragraphs and essays. And it will always be harder to write about a topic you don’t know much about than one that’s familiar. Still, it certainly helps to have had deliberate practice with higher-order writing skills—including prompt, targeted feedback.
Feedback Often Doesn’t Help
But will students’ writing improve because of feedback alone? The evidence suggests that for many, the answer is no. Feedback works best when it’s part of a systematic method of instruction.
An organization called No More Marking, which uses an innovative approach called comparative judgment to evaluate student writing, has been thoughtfully experimenting with combining human and AI-generated feedback. (They’re sponsoring a webinar on the topic specifically for US teachers; for information and registration, click here.) They’ve also analyzed what happens after students get feedback.
At one school the organization partnered with, most students were able to use feedback to make improvements, but others struggled. In at least one case, a student tried to respond to feedback but ended up making his writing worse.
And a recent study of over 900 students in Germany paints an even more dispiriting picture. It found that 20 percent of students who got feedback on their writing failed to “engage” with it. Of the 80 percent who did engage, almost half failed to improve their writing.
According to the German study, these findings are in line with other research. One study found that 71 percent of students who got feedback failed to engage with it, and another found that only 20 percent of students who engaged with it did so successfully.
Feedback Isolated From Instruction
It appears that these studies look at “feedback” in isolation rather than as part of an instructional process. But if students haven’t been taught what to do in order to act on feedback, it’s not surprising that so many ignore it or fail to use it to improve their writing.
The German study asked students in grades seven to nine to draft a formal email in English, after which they received computer-generated feedback based on five criteria. They were then asked to revise their work based on the feedback, and their revised emails were again scored by a computer.
The study doesn’t indicate what kind of writing instruction these students might have gotten before they participated in the experiment and whether it had any relationship to the feedback they received. Instead, researchers looked at a range of other variables that might have affected performance, such as gender and cognitive abilities.
Nor did the study make a distinction between the kinds of feedback participants got. Some of it seems pretty easy to implement. For example, if students hadn’t used an appropriate greeting and closing, they were alerted to that, and some got examples to use (e.g., “Dear Mr./Mrs./Ms.”). Not surprisingly, students who got hints and examples were more likely to improve their drafts.
But another criterion was “Does the language used in your email match the criteria of a formal email in grammar and wording?” This feedback is far more difficult to act on, even with hints and examples. Similarly, a “tip” like “Revise sentences that are difficult to understand” is unlikely to be helpful if you haven’t been taught how to do that. You might not even be able to figure out what makes your sentences difficult to understand.
Researchers like to isolate a particular “intervention,” like giving feedback, and test its effectiveness. But it doesn’t make sense to isolate feedback from instruction, or to put all feedback in the same basket. Neither of those things makes sense in the classroom either.
A basic problem with writing instruction in the U.S. is that few teachers have gotten good training in how to do it, and their curriculum materials generally don’t provide much guidance. That can affect the quality of their feedback, too. Without adequate support, teachers may be able to provide only vague suggestions like “make it better.” Even if they say something more specific like “vary your sentence structure,” students may not be equipped to do that if they haven’t been explicitly taught how to construct different kinds of sentences.
Computer vs. Human Feedback
Another factor that can prevent students from benefiting from feedback is who delivers it: a computer or a human being. In the German study, the feedback came from a computer, and there’s evidence that students are more likely to respond when the source is a teacher or peer.
But of course, human-provided feedback can be extremely time-consuming. No More Marking’s hybrid approach, combining AI and teacher feedback, seems like a possible solution. In a recent post, NMM’s education director Daisy Christodoulou describes the mix of feedback participating teachers and students get, including audio feedback recorded by teachers that AI turns into comments for individual students.
The post also details how one teacher used NMM’s feedback in her classroom. She first gave students their AI-generated reports and associated multiple-choice questions, which focused on three areas of writing: capital letters, vocabulary, and run-on sentences. Then she went over each of those areas with the whole class.
After that, she read aloud the feedback on one student’s writing and had the class collectively come up with a list of strategies to address common problems. In addition, she asked ChatGPT to provide five new targets for each student to focus on and provide those who needed more support with examples of improved sentences. After participating in all of that preparation, students were given their original drafts to revise.
You might call that “giving feedback,” but I’d say it’s essentially instruction based on feedback. And the three areas the NMM feedback focused on were fairly straightforward compared to something like “organize your essay more coherently.”
Many students do need help with complex skills like that, but basing instruction on such global feedback would put the cart before the horse. The initial focus should be on teaching students how to create clear linear outlines and turn them into coherent essays—a multi-step process that requires lots of deliberate practice, including feedback, at each step—rather than on providing feedback on a final product.
Of course, the ultimate goal is to enable students to write well independently, without the need for much feedback. That can take a lot of deliberate practice, especially for students who face steep challenges. But with an explicit, systematic method of writing instruction, it’s eminently doable.
An Exemplary District—And a New Podcast
In my book Beyond the Science of Reading, I tell the story of a high-poverty school district in Louisiana, Monroe City, that adapted The Writing Revolution method to the content of the curriculum it was using. The results have been impressive. I also delve into what happened in Monroe in a forthcoming episode of Season 3 of The Knowledge Matters Podcast. (Note: I co-authored the book The Writing Revolution with the creator of the method, Judith Hochman.)
I’m co-hosting the six-episode podcast season, called “Literacy and the Science of Learning,” with Dylan Wiliam and Doug Lemov. Three episodes are already available, with more coming out every Tuesday. My two episodes—the last two—both focus on writing.
Episode 5, dropping on July 22, deals with the application of cognitive science to writing. Episode 6, out on July 29, takes listeners to Monroe to discover what can happen when explicit writing instruction is combined with a content-rich curriculum. Spoiler alert: teachers in Monroe discovered that when they taught students to write in clearer and more complex ways, students began to think in clearer and more complex ways.
I hope you’ll check out all six episodes of the podcast.