13 Comments

"Would the same results hold for, say, teaching literary or historical analysis, which isn’t just a matter of remembering discrete bits of information? It may be that you need a human being for that." Yes. You absolutely need a human being for that. 😳


I appreciate the ideas in this post and would like for schools with low ratings to give these ideas a try. Often, I suspect that our teaching units cover too much material. Maybe less would be more in the long run. I particularly join those who advocate for frequent testing followed by rapid feedback and correct answers. In addition, I support the idea of teachers talking less and asking more questions that require extended answers. I hope to ask questions that require thinking.

I fear that once state exams became over-valued as predictors of mastery, schools across the nation began teaching kids ways to identify the correct multiple-choice answers. Teaching children to analyze the exam has replaced teaching them to think and master information.

To summarize, I support the post's suggestions to 1) teach smaller chunks of information, 2) test frequently, 3) quickly provide feedback along with correct answers, and 4) provide examples to clarify the content of instruction.


Many of these cognitive strategies sound like Explicit Instruction.


You could call it that. But I've found that many people assume the phrase "explicit instruction" means just lecturing, when obviously it's much more than that.


I would love to see a list of those strategies and how to implement them.

We are homeschooling the kids, so they're basically 100% tutored.


You might want to consult what has become known as Rosenshine's Principles of Instruction, sort of a bible for instruction grounded in cognitive science, which you can find here:

https://www.aft.org/sites/default/files/Rosenshine.pdf

For something that's more specifically directed at homeschoolers, you could check out the British publisher John Catt, which specializes in cognitive science and education. They list two titles on their "Books for Parents" page (I haven't read either of them, so I can't vouch for them -- but the one by Patrice Bain, A Parent's Guide to Powerful Teaching, is likely to be good!):

https://www.johncattbookshop.com/collections/parent-essentials


I will check them out, thanks!


Hi Theodore. I'm confused. What Koedinger study are you referencing? You say, "In this study, students played a simple computer game, and 'mastery' was defined as achieving an 80% success rate." However, that was not the case in this research: Koedinger, K. R., Carvalho, P. F., Liu, R., & McLaughlin, E. A. (2023). An astonishing regularity in student learning rate. Proceedings of the National Academy of Sciences of the United States of America, 120(13), e2221311120. https://doi.org/10.1073/pnas.2221311120


Dear Sanie,

Thank you for your comment.

First of all, my main point was not to critique the Koedinger paper, but rather to object to making a strong statement such as "Students all learn at the same rate". If Natalie had said something like, "A recent study has suggested that perhaps learning rates are more consistent than we previously suspected", then that would be much more defensible and reasonable, although of course it would also be much less exciting! This sort of thing happens all the time (not just in education) -- a single study is hyped and very strong generalizations are made with complete confidence. These sorts of exciting results almost invariably don't pan out, so you would think that over time people would learn to be a little more guarded in their enthusiasm, but that never seems to be the case.


The Koedinger et al. paper is very difficult to read, even for an academic paper. My background is in biostatistics, not educational statistics, so I have to plead some measure of ignorance. However, here are a few criticisms:

* The paper reports on 27 different datasets, which range from primary school to college, so it's an extremely heterogeneous sample.

* There doesn't seem to be any discussion of how students were recruited, what their backgrounds were, any exclusion criteria, etc. So it's impossible to understand what the study population is.

* The data are based on a system called LearnLab, and it's hard to find out any information about this system other than a few screenshots in the supplementary materials.

* The authors use a very complicated Bayesian regression model, which makes it difficult to interpret their results in a simple, intuitive way, and they don't provide basic descriptive statistics to justify their conclusions.

Koedinger et al. clearly state that they define "mastery" as achieving an 80% success rate. For instance, in the third paragraph after the abstract, starting with "In this paper, we combine these cognitive models . . . ", they list their research objectives, and the first one reads: "How many practice opportunities do students need to reach a mastery level of 80% correctness?" They use this threshold in other parts of the paper as well.


According to the article in The Hechinger Report by Jill Barshay, which Natalie seems to be using as her source, "The fastest quarter of students improved their accuracy on each concept (or knowledge component) by about 2.6 percentage points after each practice attempt, while the slowest quarter of students improved by about 1.7 percentage points." That's slightly above a 50% increase for the faster students. Over 7 trials that would result in a difference of over 6 percentage points, so it does seem that some students really do learn faster than others.
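As a quick back-of-the-envelope check of those numbers (a rough sketch that assumes per-attempt gains simply add up linearly -- a simplification for illustration, not the paper's actual model):

```python
# Back-of-the-envelope check of the Hechinger Report figures.
# Assumes (for illustration only) that per-attempt gains add linearly.
fast = 2.6   # percentage points gained per practice attempt, top quarter
slow = 1.7   # percentage points gained per practice attempt, bottom quarter

print(f"Relative speed advantage: {fast / slow - 1:.0%}")       # ~53%
print(f"Gap after 7 attempts: {(fast - slow) * 7:.1f} points")  # 6.3
```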

I agree that I was sloppy to describe the intervention as "students played a simple computer game". Instead, Koedinger et al. describe their intervention as "intelligent tutoring systems, educational games, and online courses". Again, this is so heterogeneous that it's difficult to make sense of what was going on, or how to make any sort of coherent generalization. For what it's worth, in the Supplementary Materials they have a screenshot of a primitive Battleship game. So, yes, they were playing simple computer games some of the time, but it was inaccurate for me to describe it the way I did, because it makes it sound like they were playing something like Grand Theft Auto.


Just to weigh in here: First, I was alerted to the Carnegie Mellon study (or Koedinger study, if you like) via Jill Barshay, but I did actually read the entire study myself (albeit not the appendices).

I agree that the findings sound implausible! And there does seem to be a contradiction in the study that I can't quite figure out. As you note, the researchers define "mastery" as 80% correct, and they say that some learners started out at 75% correct on a pretest whereas others started at maybe 55%. It stands to reason that those who start out at 75% will require fewer "learning opportunities" to reach 80% than those who start at 55%. So I don't really know where they get the finding that everyone needs about 7 learning opportunities to reach mastery.

On the other hand -- and this is what I tried to say in my post -- it's possible that they still found that everyone learns about the same amount during a "learning opportunity." It's just that some are going to need more of those opportunities than others to reach mastery. That's my interpretation of what they mean when they say all participants in the study learned at the same rate.
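To make that interpretation concrete, here's a minimal sketch, assuming a constant linear gain per opportunity and borrowing the 2.6-point figure from the Hechinger Report (the researchers' actual Bayesian model is more complicated):

```python
import math

# Minimal sketch: if everyone gains the SAME amount per learning
# opportunity (the "same learning rate" claim), students with lower
# pretest scores still need MORE opportunities to reach 80% mastery.
# The linear model and the 2.6-point gain are illustrative assumptions.
MASTERY = 80.0   # percent correct, the paper's mastery threshold
GAIN = 2.6       # assumed percentage points gained per opportunity

for pretest in (75.0, 55.0):
    needed = math.ceil((MASTERY - pretest) / GAIN)
    print(f"Pretest {pretest:.0f}%: roughly {needed} opportunities to reach mastery")

# Pretest 75%: roughly 2 opportunities
# Pretest 55%: roughly 10 opportunities
```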

I don't have a background in statistics myself (in fact, I dropped out of a PhD program many years ago partly to avoid taking a statistics exam -- my loss!). But I trust Jill, who knows more about statistics and research methods than I do and who has poked holes in highly touted research before. In addition, the researchers themselves seemed quite surprised by their finding. Their goal in the study was to figure out why some individuals learn more quickly than others -- not to try to prove that everyone learns at the same rate. So those things inclined me to feel the findings, while seemingly implausible, had some merit.

But as I also try to say in my post, these findings occurred under highly artificial circumstances and also probably don't generalize to many other subjects. I brought in the study only to argue that we could do a lot more to help students learn -- not to argue that we could enable all students to learn at the same rate.


One aspect of educational "research" that I find frustrating is how weak or mediocre work is accepted uncritically, and results are often hyped far beyond what the actual study justifies. The Koedinger study is an example of this. In this study, students played a simple computer game, and "mastery" was defined as achieving an 80% success rate. The Hechinger Report article clearly states that the top quarter of students' learning pace was 50% higher than the bottom quarter's, so even using their own data there is a clear difference in learning speeds. And it's completely unjustified to generalize this one study to the claim that "students all learn at the same rate". At best, all you can conclude is that the computer game used for the study was unchallenging, so that everyone could eventually achieve "mastery". Pretty much everyone can learn how to operate a cash register at a supermarket checkout line, but that doesn't mean that everyone can learn at the same pace; instead, it means that operating a cash register is not very intellectually challenging.

This study is literally incredible -- as in, it is not possible to believe it. Anyone with actual teaching experience will quickly see that students do not all learn at the same pace, and even when everyone starts from an initial point of complete ignorance, some students will learn much faster than others.
