To Make Progress In Reading, We Need To Monitor It Differently


“Data-driven” instruction makes sense only if the data that’s driving it makes sense. And much of the data used to guide reading instruction doesn’t.

Throughout the school year, classrooms across the country use standardized tests to determine students’ reading levels, identify where they need help, and predict their performance on end-of-year state reading exams. According to a RAND survey, 93% of reading teachers administered some kind of “benchmark” or “interim” assessment during the 2021-22 school year. It’s a $1 billion-plus market.

In an attempt to address pandemic-related learning loss, schools are relying on this data more than ever. Parents, too, often look to scores on these tests as markers of progress.

And yet, there’s very little outside evaluation of whether these assessments do what they purport to do. In 2016, EdReports, an organization that reviews curricula, launched a project to conduct such evaluations. But the organization recently suspended the effort because it couldn’t get enough test publishers to participate. Maybe some were wary of what such an evaluation would reveal about their tests’ reliability.

In one study, about a thousand students were given four different standardized tests to determine their reading comprehension ability. On average, only 43% of the children identified by one test as poor readers were also identified as poor readers by another test, and the same thing happened when the tests tried to identify the good readers. In other words, if one test placed a student in a given category, the odds that a second test would agree were less than fifty-fifty.

Knowledge and Reading Comprehension

Standardized measures of progress can make sense in some areas—math, for example—as long as the tests match what’s been covered in the curriculum. But, as I’ve explained elsewhere, it’s impossible to assess reading comprehension ability in the abstract, as interim assessments—and end-of-year state reading tests—purport to do.

Reading tests claim to measure general skills like making inferences or finding the main idea of a text, using passages on topics students may or may not be familiar with. But research has shown that the more you know about the topic you’re reading about, the better your comprehension. Standardized tests—including those used to determine reading levels—don’t take account of that. And yet these measures are routinely used to guide instruction and determine what individual students are or are not capable of doing.

Let’s say a passage on a test is about mining, and the child taking the test has never heard of mining. (This is a real-life example I heard from a parent who is also a teacher.) That child might give the wrong answer to a question that asks him to make an inference. His teacher is likely to conclude he needs more practice making inferences, using texts at his “level” on random topics. But evidence from cognitive science indicates that if you’d given that child a passage on a topic he knows well—let’s say, basketball—he would have had no trouble making an inference.

At some point, through learning about a series of topics in some depth, students will acquire enough general knowledge and vocabulary to enable them to read and understand passages on topics they’re not already familiar with—the kinds of passages they’re expected to read on standardized tests. Some students are better able to do that outside of school, usually because they have more highly educated and affluent parents.

Schools can build that kind of knowledge for all students, especially if they begin in the early grades. But the evidence indicates that process can take years. It makes no sense to give children standardized reading comprehension tests every few weeks or months and expect to see progress.

Yet another problem is that some students score poorly on reading tests because they’ve never learned to decipher, or decode, written words. According to one study, fourth-graders who scored at the lowest level on a national reading test struggled simply to decode the words, even though the test purported to measure comprehension ability.

Many interim reading assessments also purport to test comprehension rather than decoding ability—or combine the two aspects of reading in ways that are hard to untangle. The result is that kids who need help with decoding are often given practice in comprehension skills instead, especially at higher grade levels.

How to Monitor Progress Accurately

If we want to monitor progress accurately—and guide instruction effectively—we need tests that are more specific and that are clearer about what they’re measuring.

First, we need to measure decoding ability separately from comprehension ability. No amount of comprehension instruction will turn a student who struggles with decoding into a proficient reader.

Second, we need to ground “reading comprehension” tests, including interim assessments, in content that has actually been taught. But there are at least two potential problems with that.

One is that most elementary schools aren’t actually trying to teach content. Instead, they’re focusing on comprehension skills in the abstract, year after year, and marginalizing content-rich subjects like social studies and science in order to do so. That’s partly because of the mistaken assumption that reading tests measure those skills.

In addition, many educators have long been trained to believe, contrary to cognitive science, that teaching comprehension skills rather than content is the way to turn students into proficient readers. So changing the system requires more than just eliminating standardized reading tests—although that would help.

Another problem is that many educators apparently don’t even think of testing kids on content they’ve actually learned, at least at the elementary level. That kind of assessment is seen as low-level regurgitation of facts rather than “higher-order” synthesis or analysis.

Ultimately, of course, we do want kids to engage in higher-order thinking, just as we want them to be able to read and understand passages on topics they don’t already know something about. But the only way to enable them to do those things is to give them access to information about a bunch of specific topics—ideally in a logical order—guide them to think analytically about that content, and assess what they’ve learned and how well they can reason about it.

When testing is grounded in content covered in the curriculum, it helps teachers determine what students have or haven’t understood about what’s been taught, and what future instruction should focus on. It also helps students learn, through something called retrieval practice.

“Testing not only measures knowledge,” a couple of experts in the area have written, “but also changes it, often greatly improving retention of the tested knowledge.” Eventually, as students retain more and more knowledge—with the help of content-specific testing—their general reading comprehension will improve.

So what’s a teacher to do if a parent insists on knowing her child’s reading level? (One teacher recently told me some parents do exactly that.) It would help to explain to the parent why those measures aren’t reliable.

And if the teacher is building the child’s knowledge—ideally through a content-rich curriculum—the teacher could show the parent how the child did on a test grounded in content covered in class. A knowledge-building curriculum should come with assessments that test not only whether students have retained the information that’s been taught but also whether they can do things like make inferences about it. (The Knowledge Matters Campaign describes several effective curricula of this kind on its website. I serve on the board of its parent organization.)

And if students are also being taught the basics of writing—including how to construct sentences and create linear outlines—having them write about what they’ve learned monitors their progress, reinforces the knowledge they’ve acquired, develops their analytical abilities, and familiarizes them with the complex syntax of written language. These are the kinds of interim assessments schools should be using.

A final caveat: At lower grade levels, standardized interim assessments may seem to show that a child is getting better at skills like “finding the main idea.” But texts at lower grade levels generally don’t assume much sophisticated knowledge and vocabulary. When those students reach higher grade levels, where the texts suddenly do assume that kind of knowledge, they may find their supposed skills are no longer enough to enable comprehension. If they lack the background knowledge needed to understand more complex texts, they’ll hit a wall.

And “data-driven” reading comprehension instruction, well-intentioned as it may be, ensures that for many kids, that’s exactly what will happen.

