How Generative AI Can Rot Your Brain
Over-reliance on tools like ChatGPT prevents learning and critical thinking.
Years ago, when GPS devices were a new thing, my son, then in college, objected to my use of them. He argued they were a “crutch.” Well, I replied airily, so is a map.
These days, when I’m trying to drive in an unfamiliar area, I’m deeply grateful for GPS, now handily installed on my phone. But, as a recently published paper illuminates, my son had an excellent point. When you read a map, you have to exert some mental effort to figure out how to reach your destination. With GPS, on the other hand, you don’t have to think. It determines your route and tells you what to do at each step.
“Offloading” the cognitive effort of finding your way from Point A to Point B in that way can prevent you from learning the layout of your surroundings—or possibly, as the paper argues, from developing a mental model of how cities are generally laid out. If your phone dies, taking your GPS with it, you’re likely to become disoriented. And if you’re young enough that you’ve never had to read a map because you’ve always had access to GPS, you might not even be able to use one.
The paper, actually a chapter in a forthcoming book, is called “The Memory Paradox: Why Our Brains Need Knowledge in an Age of AI,” and it’s freely available online. It’s the work of a team led by Barbara Oakley, a professor of engineering who is adept at explaining the nuances of cognitive neuroscience to laypeople, including teachers. (For the sake of convenience, I’ll just use “Oakley” in connection with the paper, even though there are four other authors. Carl Hendrick has also written about Oakley’s paper on his Substack, The Learning Dispatch.)
As the title suggests, Oakley’s paper isn’t just about the effects of GPS devices. Over the past few decades, we’ve been increasingly relying on a range of external tools like calculators, the internet, and now generative AI to do our thinking for us. Simultaneously, the education system has turned against memorization, dismissing it as inherently “rote,” while prioritizing supposedly abstract critical thinking skills.
As Oakley argues, those skills actually won't develop in the absence of substantive knowledge. It's impossible to think critically about a subject you know nothing about. And the more knowledge you have of a subject, the better able you are to think about it critically. Now, in the era of ChatGPT, students are trying to offload even their critical thinking, sometimes with the tacit acceptance or encouragement of their schools.
Evidence that AI Leads to Cognitive Atrophy
When we offload thinking to a device, Oakley argues convincingly, our cognitive abilities atrophy. There's already mounting evidence of that from cognitive psychology. Studies have shown that when students rely on ChatGPT to study math or computer programming, they use it as a crutch—just as I've used GPS—and don't actually learn as much as students who study the old-fashioned way. Students who rely on ChatGPT for help in writing essays produce better prose but gain no more knowledge than students who get other kinds of help—and they risk becoming "metacognitively lazy." A systematic review of 14 academic articles concluded that students' over-reliance on AI leads to "reduced critical and analytical thinking skills."
This isn't something that happens only in schools. A recent study found that "knowledge workers" who rely on AI engage in less critical thinking. And Oakley posits a chilling example of two nurses, only one of whom memorized math facts as a child. When she enters five times ten into a calculator and it returns 500—perhaps because of a mistyped digit—she has the schema, the mental model, to immediately recognize that something is wrong. But her counterpart who lacks that intuitive knowledge blithely proceeds to overdose her patients.
The “memory paradox” that gives the paper its title is that “in an age saturated with external information, genuine insight still depends on robust internal knowledge.”
One of Oakley's contributions is to provide neuroscientific backup for effects that have already been observed by cognitive psychologists. Psychologists study the mind by observing what happens when, for example, you give one group of students access to ChatGPT and compare them to a control group. Neuroscientists can tell you what's going on inside the brain that explains what psychologists have observed.
For example, take retrieval practice, which means recalling information that has previously been learned. Hundreds of cognitive psychology studies have demonstrated the benefits of retrieval practice for enabling people to access information in long-term memory. Oakley’s paper explains that retrieving a memory repeatedly reinforces an engram—a physical trace that a memory leaves in the brain—by engaging a neural network.
Personally, I find the evidence from cognitive psychology about the benefits of retrieval practice and of knowledge generally to be sufficiently convincing. But neurological concepts like engrams, along with discussions of the interplay between the hippocampus and the basal ganglia, might help convince those who remain more skeptical than I am.
Education and the Flynn Effect
Another of the paper’s contributions is its explanation of a recent international decline in IQ scores. During most of the 20th century, in a phenomenon known as the Flynn Effect, IQ scores steadily increased. But beginning with people born in about 1975, scores plateaued and then declined, particularly in wealthier nations.
While noting that it’s hard to isolate a cause, Oakley observes that the reversal of the Flynn Effect has coincided with two mutually reinforcing trends that have shifted people away from storing information inside their heads. One is the growing hostility to memorization in Western educational systems. The other is the advent of devices, like calculators and smartphones, that enable students to avoid the cognitive effort required for learning.
We can only imagine what the cognitive effects of generative AI tools like ChatGPT will be in the future. Reports from college campuses, where more and more students are using AI to summarize reading assignments and do their writing assignments, are already alarming. Professors say AI gives students the illusion of knowledge; they feel prepared, based on what AI has told them, but then struggle when asked to interpret or respond to passages independently. Unaccustomed to reading complex text, students say that trying to do so only confuses them.
One professor has called generative AI “incredibly destructive to the teaching of university students.” What students are doing, he wrote, is “like going to the gym and asking a robot to lift weights for you.”
Complementing Learning Rather than Replacing It
Oakley acknowledges that using AI can have benefits. In fact, her paper opens with a disclosure that the authors “used artificial intelligence tools … to assist in identifying relevant research literature and refining the clarity and readability of the manuscript.” In any event, AI is already inescapable. While reading Oakley’s paper on my computer, I was frequently confronted with a pop-up suggesting “Ask AI Assistant” or one that said “Save time with a document summary”—even though, as with all academic papers, the authors themselves had provided such a summary in the form of an abstract.
Crucially, like many thoughtful commentators, Oakley urges that AI be used to complement learning rather than to replace it. AI can, for example, provide hints or check student work rather than simply provide an answer.
While that’s true, we can’t just rely on students themselves to draw the line between complementing and replacing learning. Learning requires effort, and for many students the temptation to shift that effort onto a bot will be irresistible. Schools and teachers will need to somehow put guardrails in place.
But fundamentally, as Oakley argues, the education system needs to start seeing factual knowledge as “the glue for high-level thinking” rather than simply “rote trivia.” And that shift needs to happen within the K-12 system in order for the situation to change significantly at the college level.
There are signs of such a shift in at least a minority of American schools. An increasing number of elementary schools are moving away from curricula that foreground supposedly abstract comprehension skills, like “making inferences,” in favor of an approach that focuses on building substantive knowledge. And one high school English teacher, writing in Edutopia, recently suggested strategies to help students overcome “the forgetting curve.” Even she, though, was careful to avoid the dreaded term “memorization.”
“When students see their learning linking to the real world,” she wrote, “they’re not memorizing: They’re weaving new knowledge into their lives.” Fine, whatever—as long as they’re storing the information in long-term memory, where it can support higher-order thinking.
Transferring Findings to the Classroom
As always with research findings, though, it may not be a simple matter to transfer them to the classroom—especially the humanities classroom. For example, Oakley urges educators to encourage students to grapple with problems on their own, but only once they’ve achieved 85 percent “accuracy” during practice. But how do you determine when they’ve reached the crucial 85 percent mark?
If you're teaching math, you could give students a quiz on a set of similar problems and see if they can answer 85 percent correctly. Even there, though, some are likely to fall below that mark and others above it. And if you're teaching something like literature, history, or philosophy, it's even harder to determine what tasks will fall into that "sweet spot" of being neither too hard nor too easy.
Oakley also relies heavily on the complementary interaction between two "learning systems," declarative and procedural. Declarative knowledge can be seen as "knowing that"—for example, that the French Revolution began in 1789. Procedural knowledge is "knowing how"—for example, how to ride a bike. Oakley explains that through learning, knowledge shifts from declarative to procedural. When you're first learning to multiply five times seven, or to spell a word like school, your knowledge is declarative; you have to think about it. But practice it enough and it becomes procedural and automatic.
That’s true for a lot of knowledge—and we certainly want nurses to have practiced their times tables to the point where they can intuitively recognize when a calculator has made a mistake. But I found myself thinking (as I often do) about writing. How does the idea of a transition from declarative to procedural knowledge apply there?
Writing and Procedural Knowledge
Clearly it does apply to fundamental aspects of writing like letter formation and spelling. But what about figuring out what words to use, how to construct sentences, how to write a summary, and how to plan paragraphs and essays?
Those things will never become completely automatic or “procedural,” no matter how much you practice. (As someone who writes a lot, trust me on that.) At the same time, if students get explicit instruction in things like sentence construction and outlining, with lots of practice and feedback from a teacher, it can make a big difference. It will always be harder to summarize a text on a topic you know little about, as compared to a topic that’s familiar, but if you know how to summarize, you’ll definitely have an advantage.
I would love to see more scholarly attention focused on writing—especially in the age of generative AI. If we make writing less overwhelmingly difficult for students by providing the explicit instruction and guided practice that few currently receive, we might muffle the siren song of ChatGPT. And getting them to write, on their own, about what they’re learning can boost their ability to understand and analyze the complex text they’re expected to read—perhaps even after they’re already in college.
Placing guardrails on ChatGPT and banning phones from classrooms are common-sense measures. But they’re unlikely to be enough to enable students to read as well or think as deeply as their peers in the not-so-distant past. As Oakley’s paper argues, our education system also needs to embrace building students’ content knowledge—and, I would add, teaching them to write about what they’re learning.
Having just concluded the TWR 3-12 Intro class yesterday, I can testify to the importance of explicit writing instruction beginning at the sentence level. I think it's important that the same continuum and language be used across grade levels, just as in reading, so that skills like sentence writing, single-paragraph organization, outlining, etc., become automatic as students advance from grade to grade. It makes me wonder if too much "journaling and creative writing" is taking up valuable instructional writing time, in an attempt to get kids comfortable with writing. Wouldn't automaticity in writing skills make kids comfortable with writing, just as knowing math facts makes students comfortable with higher-level math?
Question for you, Natalie.
Are you familiar with any early elementary AI literacy tutor products? The district where I teach requires us to use one such program called Amira. Research seems very spotty. The studies demonstrating its efficacy cited on Amira's website mostly either have the company logo all over them, or are 10+ years old and do not directly reference the specific Amira product. I have a funny feeling about it; something seems off, slightly nefarious. Students are to read onscreen stories aloud to basically a chatbot with a 2D onscreen avatar who "helps" them sound out words. It seems fine for very young emergent readers, but I teach 3rd grade, so it gets sorta dicey having a chatbot trying to teach and assess comprehension skills and strategies on the completely random passages it provides them to read. Some passages have citations that show what they were taken from. Some don't; those, I suspect, are AI-generated text.
I have been reading and hearing a lot about ChatGPT in older grades, but remain really curious to hear your thoughts on AI in the context of reading tutors for K-3 students. Thanks!