Scores for Adults Are Dropping on Tests of Basic Skills
As with kids' test scores, a crucial question is the role of knowledge
An international test of adults’ “basic skills” shows that an increasing number of Americans are struggling to do moderately complex tasks that involve reading and math. Unlike tests given to students, these tests attempt to evaluate real-life skills, like whether a person can understand what a list of rules prohibits or calculate how much wallpaper is needed to cover a room—or, at more advanced levels, draw conclusions based on multiple sources or evaluate statistical claims.
Known as the PIAAC, which stands for Program for the International Assessment of Adult Competencies, the test was given in 2023 to a representative sample of adults in 31 countries or regions that are part of the OECD, an organization of relatively developed and affluent jurisdictions. Only a few countries showed improvement since the last administration in 2017. Most had scores that were stagnant or, like the U.S., in decline.
The U.S. was below the OECD average in both literacy and numeracy—as well as in problem-solving, an area that hadn’t been tested before. On the test’s 500-point scale, American adults scored 258 on average in literacy, 13 points lower than in 2017, and 249 in numeracy, 6 points lower.
The data is also reported in terms of “levels.” On average, American adults scored at Level 2 on the five-level scale in both literacy and numeracy. Compared to 2017, the proportion performing at the lowest level or below increased from 19 to 28 percent in literacy and from 29 to 34 percent in numeracy.
A connection with declining scores for kids?
It’s tempting to link the decline in adult scores to the parallel phenomenon in student scores—especially since the U.S. scores are all announced by the same federal education official, Peggy Carr, whose job requires coming up with new ways to describe depressing data.
Carr said the adult scores show there’s a “dwindling middle in the United States in terms of skills,” with “more adults clustered at the bottom”—just as with kids in both reading and math. On the 2023 adult tests, the U.S., along with Singapore, displayed the “largest skills inequalities in literacy and numeracy” among the countries that participated.
But the connection between scores for adults and those for kids isn’t clear-cut. For one thing, the two sets of tests aim to measure somewhat different things. Beyond that, if recent developments in the U.S. education system were to blame for the decline in adult scores, you would expect to find the steepest decline among younger adults, who emerged from that system more recently.
In fact, though, it was older adults whose scores declined the most. Those aged 55 to 65 went down 8 points in numeracy, compared to 7 points for those aged 16 to 24 and 5 points for those aged 25 to 34. In literacy, the oldest adults declined by a full 16 points, compared to 11 and 14 points for the two youngest cohorts. Another complicating factor is that the decline in scores is also happening in other countries with different education systems.
It’s hard to draw firm conclusions from international tests, given the multitude of factors that can affect scores (which might include, for older adults, the fact that the 2023 administration didn’t provide the pencil-and-paper option offered in the past). But disparities between countries can nevertheless shed some light on what might work in education and what might not.
Some countries, for instance, had much greater inequality in scores than others. Adults with highly educated parents scored at least 34 points higher than those with less educated parents in countries like Israel, Switzerland, and Hungary, but only 7 points higher in Spain. That suggests that the Spanish education system is reducing inequality. On the other hand, maybe that’s because even the offspring of the highly educated don’t do that well in Spain, which is one of several countries where college graduates failed to perform as well as Finnish high school graduates.
Finland, in fact, came out at the top across the board on the adult tests, and it was one of only a few countries that saw improvements at both the highest and lowest achievement levels. Only 12 percent of Finnish adults scored at Level 1 or below, compared to an OECD average of 26 percent.
That suggests Finland’s education system is doing something right, and some may be tempted to just copy it. But that’s tricky too. Finnish students were at the top on international tests 20 years ago, but their performance has been declining ever since.
Do the tests really measure skills?
To understand what may be driving the declines in scores, along with widening inequality, it’s important to understand what the tests are really measuring. Like standardized reading tests given to kids, the adult tests purport to measure abstract skills. The descriptions of PIAAC skill levels are phrased in terms of the ability to do things like locate information on a page of text, for Level 1, or “construct meaning across large chunks of text or perform multi-step operations in order to identify and formulate a response,” for Level 3.
But the sample PIAAC questions, like those on reading comprehension tests, actually require a certain threshold of knowledge. They don’t ask about specific names or dates, but they assume familiarity with the complex vocabulary and syntax of written language, along with some general knowledge of the world.
One literacy question involves a passage describing how bread and crackers become stale, with the last sentence reading “Crackers seem soft when their moisture level reaches about 9 percent.” The sample “low difficulty” question asks, “At what moisture level do crackers become soft?”
The question may seem straightforward, but it doesn’t use the kind of sentence structure that appears in oral language. People are more likely to say something like, “When do crackers get soft?” Lack of familiarity with complex syntax can be a huge barrier to comprehension.
So can unfamiliar words or concepts. It’s been estimated that comprehension starts to break down when as little as 2 percent of the words in a passage are unfamiliar. The bread-and-crackers text uses a number of words and phrases that aren’t likely to crop up in ordinary conversation, like “exposed to the elements,” “crystallized,” and “retrogradation” (the last of which is explained as a process “in which the starch structure of the bread changes”). If too much of the vocabulary and syntax of a passage or question is unfamiliar, test-takers might never get a chance to demonstrate their reasoning skills.
With numeracy, the situation is somewhat different. To calculate something like the amount of wallpaper needed to cover a room, test-takers need to both recall algorithms they learned long ago and figure out how to apply them. But because the numeracy problems use words, verbal knowledge can also be crucial. One question involving the temperature in a “coolroom” uses measurements in Celsius, and other questions use kilograms, meters, and centimeters. Many American adults are likely to be unfamiliar with these metric units and what they represent.
Education can reduce inequality
All of this suggests at least a couple of observations. One is that the tests might not reflect adults’ actual reasoning abilities as much as their ability to understand written text. Another is that education could do a lot to reduce the disparity between those who struggle with written text and those who don’t—a disparity that has a lot to do with inequalities in society. Most higher-paying jobs require facility with complex text, a fact that helps explain why the PIAAC data show that high-scorers earn more and are generally healthier and happier.
And acquiring more knowledge is in fact the most reliable way to develop the kinds of skills these standardized tests purport to measure. It’s easier to think critically or creatively about topics you know more about.
That’s not to say that schools should focus lessons on determining when crackers become soft or any of the other specific topics on the test. It’s impossible to predict exactly what vocabulary or concepts people will need to become familiar with throughout their lives to be successful. But what schools can do is build students’ knowledge of a series of specific topics in a fairly logical progression, ideally beginning in the early grades, while also explicitly teaching them how to write about those topics, beginning at the sentence level.
The ultimate goal is for students to acquire a critical mass of general academic vocabulary and a general familiarity with complex syntax, because that kind of general knowledge is what can enable them to read and understand texts on topics with which they’re not already familiar. But teaching those things in the abstract doesn’t work well. People need a meaningful context in order for new information to stick in long-term memory, and an expanding base of knowledge provides that context.
It may be too late to do much to help adults who have already left school acquire the knowledge they need to thrive, but we can at least try to reduce educational inequity in the future by providing, in school, the kind of knowledge that the offspring of highly educated families absorb more or less naturally.