Another theory I have is that UK traditions of the GCSE and A-Level exams drive much of the difference.
Unlike U.S. standardized exams (e.g. SAT), which are almost exclusively *aptitude*-based (e.g. reading, math), the GCSE and A-Levels in the UK include many exams that are *knowledge*-intensive (e.g. history, biology, geography, etc.)
Britain's culture of requiring students to actually *retain* knowledge for the long-term (in *summative* exams at the end of a school year) makes it almost impossible for students to succeed without developing legit study skills -- which in turn incentivizes UK teachers, students, and school systems to understand the cognitive science principles behind retention.
In the U.S., in contrast, "memorization" has been thought of as a dirty word for decades. ("You're just training robotic kids to regurgitate dates. We need to teach them to 'think' instead," etc.) The most we tend to test on knowledge is via medium-stakes class midterm or final exams, which impact only that class's final grade, but not a student's overall university prospects in the same way as standardized test scores do.
And then the American kids get to university and we wonder why nobody actually knows how to study 🤷♂️
Compelling analysis of why assessment design is the lever that actually moves instructional practice. The contrast between Progress 8 (measuring actual content mastery across 8 subjects plus growth) and U.S. abstract comprehension tests is critical. What makes Progress 8 clever is that it doesn't just identify which schools work, it reveals which instructional models work, because the metric itself aligns with cognitive science principles about knowledge building. The U.S. testing regime actually incentivizes the opposite of what cog sci recommends.
Fascinating comparative analysis of education reform through the lens of assessment design. The Progress 8 metric genuinely embodies cognitive science principles by measuring growth across content domains, whereas US abstract comprehension tests actively work against knowledge-building. What's particularly striking is how England's accountability system doesn't just identify successful schools, it reveals which pedagogical approaches align with how memory and learning actually function, creating a feedback loop that amplifies evidence-based practice.
Natalie, as always I truly enjoy and appreciate your creative and critical take. As an American who has worked for the UK government as an education adviser and researcher over the past decade, I have experienced what you're referring to firsthand, and it is commendable what Gibb achieved...including consecutive accountability frameworks that were defined under Ofsted's Chief Inspector Amanda Spielman. I also appreciate your nuanced discussion of what is possible in the US and how assessments in particular need to change alongside any interventions in teacher prep.
I wanted to make one rather nuanced comment about the nature of the English accountability system, as I have experienced it. While it is true that England did not mandate a single instructional script beyond phonics, it is misleading to conclude that the government really "let a thousand flowers bloom", as Gove would say. Through the Teachers' Standards, the Core Content Framework, the Early Career Framework, and the Ofsted Education Inspection Framework, the government effectively prescribed a common instructional spine - explicit teaching, cognitive load principles, behaviour routines, and systematic practice... and in a way did prescribe a preferred theoretical foundation, one based on what I think is a rather skewed and limited literature review that was not open for discussion (only after the fact). Autonomy exists, but only above a tightly defined floor - which can be a problem when teacher prep is not rigorous enough to cultivate the professional judgment that would allow teachers to engage critically with evidence and facilitate rigorous teaching and learning.
Yes, the "let a thousand flowers bloom" statement from Gove was probably at least a bit disingenuous. The government did put a thumb on the scales in the ways that you mention.
I don't know what you feel was left out of the literature review, but I do wonder if the message is being communicated to teachers in England (and other places that have embraced the "science of learning") that students need more than explicit teaching and retrieval practice in order to develop deep knowledge and the ability to analyze information. They also need to be guided to elaborate on the information they're absorbing, and that's not something that I saw mentioned in Gibb's book.
Yes, I agree. There is a lot to say on this point. The smart implementation you're describing depends on rigorous professional development and/or teacher prep that cultivates that deeper understanding and ability to reason, and enables meaningful implementation of the policy…which I am afraid has not happened at the level we see in Finland's investment in teacher prep and CPD. Instead there are a lot of expectations of professionalism without adequate support - freedoms without support beyond what the standards can provide as guidance. Cognitive science arguments also often lead to simplified practices, or at least do not offer the theoretical foundation of pedagogy that is needed to make sense of them - I've tried to offer a nuanced argument about this here - https://plamenapehlivanova.substack.com/p/beyond-the-knowledge-wars-reclaiming
The two problems that you wrote E.D. Hirsch and the Knowledge Matters Movement are trying to solve.... nope, not it. The problem is curriculum that is neither rich nor coherent calling itself rich and coherent, and too much emphasis on rigorous academic standards that are too abstract to teach in the first place.
"The more information you have about a topic stored in long-term memory, the better able you are to understand a text on that topic or to think about it critically."
Have you had a chance to read Sarah Cottinghatt's piece, "The prior knowledge paradox: what to do with what they know" (https://cognitivecoaching.substack.com/p/the-prior-knowledge-paradox-what)? I think you'll find it very interesting.
Yes, that's an interesting piece -- thanks.
Obviously there's a lot more that could be said about the role of background knowledge in learning and how best to "activate" it, but that wasn't the focus of this post.
Why do we have to look at cognitive science and teaching methodology as a dichotomy, one or the other? Why can't we view teaching as more nuanced - direct teaching of unknown content can co-exist with teachers facilitating/guiding children's thinking?
I think the bigger problem is standardized testing and its use not just for evaluating children's learning but also for evaluating teachers' performance. If we want a content-rich curriculum, then what we focus on for testing must change.
Learning can be joyful and motivating when the curriculum is interesting and opens up the world for children. Children from higher income families come to school with a much wider experience background. Part of the job of schooling/teaching is to provide those rich experiences for children from less wealthy families and connect them to skills they need. Skill teaching without content is boring - probably for the teacher and the children they teach.
I think that this blog may be guilty of premature evaluation.
You use just one data point (PIRLS 2021) to demonstrate improvements in England. That study states that "Since 2016, performance at each of the International Benchmarks in England has seen no statistically significant changes."
It also states that: "Internationally 46% of pupils said that they very much like reading, this compares to 29% of pupils in England who very much like reading. The proportion of pupils in England reporting that they enjoy reading is lower than in previous cycles."
https://assets.publishing.service.gov.uk/media/661667a756df202ca4ac0538/PIRLS_2021_national_report_for_england.pdf
England may well have improved some aspects of its school system, but outcomes data doesn't yet demonstrate this (in particular, our National Reference Test scores have barely changed since their introduction).
More on national reference test here https://www.gov.uk/government/publications/the-national-reference-test-in-2025
The claim isn't that England has succeeded in "fixing" its education system. That is a long-range and perhaps never-ending project, in any country.
Rather, I'm arguing that England has figured out a way to reliably highlight schools that are doing a good job of actually educating their students, enabling those schools to serve as models for others. That's not a guarantee that the success of those schools will spread across the country, but it may well be a necessary condition for that to happen.
For one thing, that Progress 8 measure -- and other government initiatives -- appears to have convinced many teachers that the education orthodoxy they learned during their training conflicts with what science tells us about how children learn. That's only the beginning of the needed changes, but it's pretty huge, and it hasn't happened in the U.S.
Thank you for this, Natalie! People have kept asking me why Brainscape has been so much more popular in the UK (proportional to population), despite our being a US-based company, and now I have a great article to point to 🙏 🎯
Yes, you definitely have me convinced about the importance of one curriculum that you can test against.
Decide what kids need to learn, teach them. And then test them on what they should have learned. It really is that simple.
While always appreciative of the work and context, I'm curious what measures you are using to establish that this worked beyond the rankings shift on the PIRLS test? That feels like very little data to make quite a far-reaching assumption—which for me echoes how much of the Ed Reform movement has operated in the past few decades in the United States.
Not suggesting any of this is wrong, but rather wondering what evidence is being offered to justify such a confident claim about what is and is not working?
Perhaps I should have spent more time in the post discussing evidence of success. In response to another commenter on the PIRLS data, I noted that scores on England's own Year 1 phonics check have also improved dramatically. The pass rate went from 33% (for a pilot in 2011) to 82% in 2018.
But beyond phonics, I would say that the most compelling evidence of success is the Progress 8 "league tables," where the top schools -- many of them primarily serving disadvantaged students -- have achieved pretty stunning results with methods grounded in cognitive science. According to that data, some of these schools are adding the equivalent of a year or even two of learning beyond the average for their students.
England's own longitudinal data don't yet show overall improvement on English assessments (although they did for math, before the pandemic), but getting nationwide results takes time -- even in a relatively small country like England. Still, the schools that aren't yet hitting it out of the park can look to those who are (as identified by Progress 8) and emulate what they're doing.
That process of emulation may have been slowed by the fact that (as I mention in a footnote) England hasn't historically had readily available detailed instructional resources. But some of the top-performing schools on Progress 8 are now making the resources they have created available to other schools. That may help speed up progress throughout the country.
Thanks for this response! It still seems to me that there is a clear/unequivocal argument being made, based on the data, that this is working and therefore deserves to be emulated elsewhere, and after the past decades of education reform in the US I tend to prefer caution before wholesale adoption. (Especially given the wide variety of contexts/policies state-to-state in our country, making comparative data even trickier to navigate.)
But having these other indicators definitely adds weight to the argument, and I appreciate you taking the time to offer it!
"on the international PIRLS test, given to fourth-graders, England rose from tenth place in 2011 to fourth place in 2021, out of 43 participating jurisdictions." No mention of the raw scores? Australia rose to 10th in PISA last time, only because other countries' raw scores went down even quicker... (the Steven Bradbury effect)
If "other countries' raw scores went down even quicker," you could argue that the fact that England's did not "go down" as much (presumably an effect of the pandemic, which affected virtually every country) is itself a sign of success. But I'm really not interested in getting into a debate over PIRLS. It was not the focus of the post, and I haven't looked at the raw scores.
I will just add that scores on England's own Year 1 phonics check have improved dramatically. The pass rate went from 33% (for a pilot in 2011) to 82% in 2018.