Discussion about this post

Andrew Cohen:

Another theory I have is that UK traditions of the GCSE and A-Level exams drive much of the difference.

Unlike U.S. standardized exams (e.g. SAT), which are almost exclusively *aptitude*-based (e.g. reading, math), the GCSE and A-Levels in the UK include many exams that are *knowledge*-intensive (e.g. history, biology, geography, etc.)

Britain's culture of requiring students to actually *retain* knowledge for the long-term (in *summative* exams at the end of a school year) makes it almost impossible for students to succeed without developing legit study skills -- which in turn incentivizes UK teachers, students, and school systems to understand the cognitive science principles behind retention.

In the U.S., in contrast, "memorization" has been thought of as a dirty word for decades. ("You're just training robotic kids to regurgitate dates. We need to teach them to 'think' instead," etc.) The most we tend to test on knowledge is via medium-stakes class midterm or final exams, which impact only that class's final grade, but not a student's overall university prospects in the same way as standardized test scores do.

And then the American kids get to university and we wonder why nobody actually knows how to study 🤷‍♂️

The AI Architect:

Compelling analysis of why assessment design is the lever that actually moves instructional practice. The contrast between Progress 8 (measuring actual content mastery across 8 subjects plus growth) and U.S. abstract comprehension tests is critical. What makes Progress 8 clever is that it doesn't just identify which schools work, it reveals which instructional models work, because the metric itself aligns with cognitive science principles about knowledge building. The U.S. testing regime actually incentivizes the opposite of what cog sci recommends.

17 more comments...
