AI-Generated Summaries Are a Problem—for Authors and Society as a Whole
If students and general readers rely on simplified and inaccurate versions of texts, they'll miss a lot.
Readers and publishers can now easily get artificial intelligence to produce summaries and simplified versions of texts and books. These AI-generated products may have their uses, but if readers rely on them as substitutes for the originals, that’s a problem not just for authors but for society as a whole.
In a recent post on his Substack, college instructor Marc Watkins wrote about experimenting with AI “reading assistants” in his writing courses for first-year students. His students were thrilled by the assistants’ ability to quickly summarize texts.
But Watkins—who trains other faculty members in AI literacy and is something of an AI expert—found himself perturbed. The summaries, he wrote, were “focused on speed and task completion over nurturing developing skills or honing existing ones.” (My thanks to Scott Newstok, an English professor at Rhodes College, for alerting me to Watkins’ post.)
According to Watkins, similar summarization apps are being heavily promoted on social media and targeted to students. And he says that “for $5 a month, anyone is now able to summarize and query a PDF using Adobe's AI Assistant.”
Apps can also be used to lower the reading level of a text. One of the social media posts reproduced on Watkins’ blog shows a downcast female student who is thinking, “I wish someone could explain this to me like I’m five.”
I tend to be a late adopter of technological innovations, so I just ignore the AI Assistant that pops up when I open a PDF and asks if I’m looking for “key takeaways.” (I didn’t realize I was paying $5 a month for this privilege!) But apparently others, like Watkins’ students, have no such reservations.
It’s not just students looking for a shortcut through their assigned reading who find the idea of using AI appealing. Amazon has been flooded in recent months with AI-generated “summaries” and “workbooks” billed as companions to books that are selling well. Presumably that means there’s a market for these products.
The Washington Post reported on the beginnings of the phenomenon exactly a year ago, on May 5, 2023, and in March it followed up with an article detailing its explosion. I became aware of the issue around the same time, when I got an email from a school administrator who was having her staff read my book, The Knowledge Gap. She asked if I would recommend the workbook that went with it.
What workbook? I soon discovered that not one but two workbooks were being sold on Amazon by publishers I’d never heard of and couldn’t find online. I ordered one out of curiosity. And when it arrived, I was pretty appalled.
Watkins worries that students who rely on AI summaries will lose the ability to engage in “close reading.” How, he asks, can an AI summary convey the craft, style, and nuance of a short story by James Joyce or Flannery O’Connor? My book isn’t literary fiction, so I’m not expecting anyone to do that kind of close read. Still, the AI-written workbook omits a lot of nuance—and includes some weird inaccuracies.
Odd Hallucinations and Dry “Key Points”
The most egregious is a sentence in a front section called “How to Use This Workbook.” It promises readers the workbook will “deepen your understanding of Marriage [sic] and its complexities.” That’s undoubtedly a worthy topic, but my book is about education.
Another weird thing: The workbook provides a two-page summary of each chapter, followed by six “Self-Reflection Exercises”—questions at the top of a blank, lined page that readers are supposed to answer. The first chapter is about elementary education, and apparently that led the bot to assume the book is aimed at children—which it is not. One of the workbook questions, for example, reads, “How do you think this knowledge gap could affect your ability to participate in civic activities when you grow up? Explain your reasoning.”
Aside from these AI “hallucinations,” the summary isn’t badly written—that’s how you know it’s AI, I guess. But it’s flat and boring. To make the book engaging, I introduced a number of “characters,” real people whose stories would help illustrate the points I wanted to make. I also followed elementary classrooms through a school year, interspersing scenes that I hoped would bring the issues I was writing about to life. The workbook only alludes to those characters and scenes in passing a few times (sometimes mixing them up), and it skims over a lot of other important subtleties. The skeletal “key points” end up being pretty dry and repetitive.
“We are committed to the art of creating exemplary workbooks,” reads a statement from the publisher on the workbook’s Amazon page. “Through meticulous research, we ensure that our workbooks become your most trusted source of knowledge.” I imagine this statement too was generated by AI.
Watkins worries that his college students will uncritically accept AI summaries rather than challenging the text. That’s a risk, but I worry that readers of the workbook will reject its oversimplified and conclusory version of my narrative. Most of those who are likely to read my book aren’t gullible college students. They’re educators who may be quite skeptical of my arguments, since they contradict most of what they’ve been told during and after their training. A few bare “key points” may not be enough to convince them.
Of course, the workbook is billed as a “companion” or “guide” to the book, not a substitute for it. But I suspect that most of those who are likely to buy the workbook are, like Watkins’ students, looking for an alternative to reading the whole thing. (Some districts have organized study groups based on the book, and I’ve been told that it’s assigned in some education courses.) If readers are interested in “self-reflection” questions, they’re better off using the discussion guide that’s included in the paperback version and available for free on my website.
An Old Problem Now on Steroids
To some extent, this is an old problem. Students have been relying on CliffsNotes and similar “study guides” for decades, often as an alternative to reading a full text. But those summaries are written by human beings, who are in a better position to discern what’s worth including. At least, they used to be written by human beings. Now, who knows?
The idea of adjusting the reading level of a text to suit the reader—and possibly losing some important nuance in the process—is also not new. When I was researching my book, a high school teacher told me that his district was using different versions of history texts to accommodate students at different ability levels. The version for higher-level students might refer to “the founders of our nation,” whereas the one for lower-level students would say, inaccurately, “Ben Franklin started the nation.”
But even if these problems have existed in the past, the AI revolution is putting them on steroids. Watkins argues that “uncritically adopting an AI to read for you is a far more dangerous threat to knowledge acquisition and learning than using AI to write for you.” I’m not sure I agree, but I do think the threats from both AI-assisted reading and writing are serious, and there’s been much less attention focused on the threats that relate to reading.
To the extent that people have focused on the problem, they’ve looked at authors who are, like me, distraught about AI-generated knockoffs. That distress isn’t out of fear that the knockoffs will cut into sales of the book—at least, not for me. Rather, I’m concerned (a) that readers will think I wrote the stupid thing myself, or at least approved it, and (b) that they’ll come away with a mistaken or shallow understanding of my book.
Unfortunately, unless you’re a famous person who knows the CEO of Amazon, there’s not much you can do to fight back. Copyright law doesn’t help, because it only protects against copying. When I complained to Amazon, they told me—after three weeks—that I should provide specific examples of where the workbook used text I wrote without my permission. But that’s not the problem.
Similarly, my publisher’s lawyer advised that workbooks and AI-generated summaries—which dispense with the “self-reflection question” pretense—generally aren’t actionable because they don’t reproduce enough content from the book itself to qualify as copyright infringements.
The only recourse, my publisher has told me, is for an author to create a workbook of her own. I’m considering that, but it’s not clear to me how serious the problem is. I would hope that anyone who buys an AI knockoff would complain to Amazon, and some apparently have, but someone gave the workbook I ordered five stars!
That workbook is now listed as out of print, and another one has disappeared. But now there’s a new workbook for sale, along with another book that’s billed as a 42-page “summary.” It could be a heads-of-the-Hydra problem: you get rid of one, and two more grow in its place.
Of course, the larger issue is what this burgeoning phenomenon will do to students and the reading public in general. And I wonder if the problem isn’t an outgrowth of our failure to build students’ academic knowledge and expose them to complex text beginning in the early grades. (That failure, for those who are looking for a really quick summary, is the basic topic of my book.)
If we don’t start familiarizing kids with complex vocabulary and syntax when they’re young—through read-alouds, discussion, and explicit writing instruction—and instead limit them to simple texts they can easily read themselves, should we be surprised that when they’re older they get frustrated by books that assume knowledge and vocabulary they may not have? And that their solution is to find a version at their “level”?
And if we teach K-12 students using brief excerpts of longer texts, followed by comprehension questions—in an effort to prepare them for standardized tests—is it any wonder they lack the stamina to read a whole book in college and beyond?
Perhaps the problem is of our own making, and we’re just reaping what we’ve been sowing.