Having just concluded the TWR 3-12 Intro class yesterday, I can testify to the importance of explicit writing instruction beginning at the sentence level. I think it’s important that the same continuum and language be used across grade levels, just as in reading, so that skills like sentence writing, single-paragraph organization, and outlining become automatic as students advance from grade to grade. It makes me wonder if too much “journaling and creative writing” is taking up valuable instructional writing time in an attempt to get kids comfortable with writing. Wouldn’t automaticity in writing skills make kids comfortable with writing, just as knowing math facts makes students comfortable with higher-level math?
Question for you, Natalie.
Are you familiar with any early elementary AI literacy tutor products? The district where I teach requires us to use one such program, called Amira. The research seems very spotty: the studies demonstrating its efficacy cited on Amira's website mostly either have the company logo all over them or are 10+ years old and don't directly reference the specific Amira product. I have a funny feeling about it; something seems off, even slightly nefarious. Students are to read onscreen stories aloud to what is basically a chatbot with a 2D onscreen avatar that "helps" them sound out words. It seems fine for very young emergent readers, but I teach 3rd grade, so it gets sort of dicey having a chatbot try to teach and assess comprehension skills and strategies on the completely random passages it provides them to read. Some passages have citations showing where they were taken from. Some don't, and I suspect those are AI-generated text.
I have been reading and hearing a lot about ChatGPT in the older grades, but I remain really curious to hear your thoughts on AI in the context of reading tutors for K-3 students. Thanks!
Emily, I'd argue that AI isn’t the issue here. It sounds like there is a huge design flaw in the program's tasks. Comprehension happens when the reader builds a mental model from what the text says plus the reader’s prior knowledge. Focusing on comprehension skills and strategies like “inferencing” or “finding the main idea” will likely not support understanding of a text. Instead, focus on using text sets built around topics; ask questions about the difficult parts of a text and have kids share out loud how they make meaning of those parts; and give kids structures and language for expressing their knowledge. You can read more here: https://whitneywhealdon.substack.com/p/reading-misunderstood?r=f8p8m
Whitney, yes, I definitely love and agree with everything you're saying! My school district requires us to use the comprehension skills and strategies approach in whole-group teaching, and it's built into the AI tutor; I'm well aware of the issues and have been advocating for change. The conversation I'm most interested in having is about whether an AI "tutor" tool can be effective in supporting early readers as they learn to decode and/or build their knowledge for comprehension. If it is not effective, and given the many other concerns regarding data privacy, how should teachers respond to being asked to use these tools on a daily basis?
Great questions! Does your leadership have expectations about how you're to use results from the tutor? Do all kids have to use Amira? Also, how much time are you expected to spend on it, and do you have time to build knowledge and understanding outside of Amira? I'm asking because it sounds like, given your situation, it would be more about mitigating and reframing the time spent on Amira than getting rid of it, but I'm curious to learn more.
Big fan of your work, Natalie. The Knowledge Gap had a profound impact on how I think about curriculum and building knowledge for students. I share your concern about the risks of cognitive offloading, but I also believe this is precisely why AI literacy is becoming such an essential and evolving part of our educational practice. With great power comes great responsibility, and generative AI is a powerful tool. When students are taught to critique outputs, revise drafts, and discern when not to use AI, it can sharpen their thinking.
We’re still in the very early stages of figuring out how to integrate AI wisely in schools. It’s messy, evolving, and inevitable. The real challenge isn’t stopping AI; it’s preparing students to use it with intention and integrity. Thank you for sparking and leading such an important conversation.
Yes, we do need to guide students to use AI in a way that supplements rather than supplants learning, but the question is how we can do that. As I suggest in the post, if we give students unfettered access to AI, it may be difficult for many of them to resist the temptation to just have it do the cognitive work for them, even if they know that's actually a bad idea. I've seen quotes from students essentially saying just that.
I read here because I think of myself as an "educator" in the broadest sense: I believe it is our natural human duty to show our fellow human beings our given nature, what is right and wrong and so on, and to determine value, as we naturally would and should, on what is observable in all of us. That duty is naturally rooted in teaching children but clearly extends to showing all of our kind our proper living order. The most disturbing thing to me about AI is that no one seems to have the capacity to place a proper value on it. It is just one more in a long line of artefacts of "progress," ill-defined against the evidence of the almost-as-long line of "unintended consequences." There seem to be no proper rational grounds yet, no sufficient reason given, and certainly no objective evidence of the supposed "good" of AI, and yet so many are willing to just roll the dice, because it's there and thus supposedly unavoidable. But is it so? And who dares question those who keep insisting on its benefit? I, for one, must demand that it be demonstrated. But will an AI-informed mind even think of that natural requirement of reason?
Totally agree. In my current school leadership role, I’ve had students bypass school-approved tools like Khanmigo or Brisk to use something like Snapchat’s AI to do their math homework. That’s exactly why giving students guided access to AI tools with guardrails is so important: it helps them build AI literacy while still growing cognitively and supporting their learning.
The real challenge is what happens outside the classroom, where students are using tools that aren’t designed to support learning, just to give quick answers with no guardrails. There’s no easy fix. But this is the reality our students are living in and stepping into with the exponential growth of AI. We can’t ignore it, but we can shape how they engage with it to the best of our abilities.
Good one, Natalie. I totally agree.
I'm still trying to get a good sense of which cognitive processes we can offload to technology and which ones we should keep.
I'm watching "Mayday Air Disasters." The critical thinking skills needed to investigate a plane crash, or even to fly a plane, are exactly the skills AI is robbing students of the chance to learn 😬