Wednesday, March 2, 2011

Thoughts on Readability Formulas

Although I want to return to the subject of voice in writing, a topic that really intrigues me, this post is on readability formulas, a subject I'm frequently asked about in relation to my textbooks.

There is, however, a specific stimulus for this post. In an effort to go paperless in my office, I’m re-reading old journal articles and deciding which ones I want to scan into my online notebook. To that end, I re-read a 1982 article in the Reading Research Quarterly (V.18 #1 p.23) in which the authors described their research on informal reading inventories and commented that using passages at different readability levels didn't seem to affect students' performance. In other words, as the grade level of the text went up or down based on the readability formulas used, students’ comprehension scores didn't go up or down with them.

The authors then went on to write: “Although readability formulae reflect word difficulty and sentence complexity, they fail to account for one's familiarity with a text. Presumably, this failure to control for students' familiarity with reading material diminishes the validity of both commercial and curriculum-based inventories developed with readability formulae.”

Despite the resurgence of readability formulas since that article was written (in the 1980s, they were roundly and repeatedly criticized, and both the IRA and NCTE discouraged their use in the creation of written materials), I think the authors' suggestion that readability formulas are not fully adequate to the task of revealing how well students might or might not understand what they read is still timely. Actually, as many discussions of readability formulas and their history point out (see, for just one instance, the Plain Language Association website), the formulas were meant to measure ease of reading, not comprehension.

If you find that distinction confusing, you're not the only one. But after pondering it a bit, I think it means a passage coming in at a low grade level, based on a readability formula, could be easily read if you define "reading" as knowing and pronouncing the words. However, that ranking can't tell you whether or not readers can readily grasp the concepts or relationships expressed in the passage. That is, a passage given a low grade level by a readability formula is not necessarily easy to understand. By the same token, passages that earn high grade levels aren’t necessarily hard to read.

Readability formulas don't measure comprehension because they do not take into account key comprehension factors such as the syntactic complexity of the language, the familiarity of the vocabulary, the reader’s background knowledge, and the text’s conceptual difficulty. Instead, they rely mainly on sentence length and syllable counts, with some formulas including elements like passive constructions and prepositions.
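To make concrete just how little the formulas look at, here is a minimal sketch of the published Flesch-Kincaid grade-level formula in Python. The syllable counter is a crude vowel-run heuristic of my own (real implementations use pronunciation dictionaries or more careful rules), but the point stands: only average sentence length and average syllables per word enter the score. Nothing about vocabulary familiarity, background knowledge, or conceptual difficulty is computed at all.

```python
import re

def count_syllables(word):
    # Crude heuristic: count maximal runs of vowels.
    # Real implementations use pronunciation dictionaries.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_kincaid_grade(text):
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    # The published Flesch-Kincaid grade-level formula: only
    # words per sentence and syllables per word affect the result.
    return (0.39 * len(words) / len(sentences)
            + 11.8 * syllables / len(words)
            - 15.59)
```

Feed it short sentences of short words and the grade plummets; feed it long words and it soars, regardless of whether the reader would actually understand either passage.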

For instance, a readability formula would treat ennui and boredom as equals because both words have two syllables. Yet the truth is, most students would immediately know the meaning of boredom and be dumbfounded by the word ennui. Similarly, inconceivable has fewer syllables than unimaginable, but that certainly doesn't make it easier for student readers to interpret.

Readability formulas also rely heavily on sentence length to identify ease of reading. Sentence length, though, doesn't tell the whole story. As the web usability guru Jakob Nielsen points out in his discussion of writing for the web, these two sentences are exactly the same length, but conceptually they are far from equal:

He waved his hands.
He waived his rights.

As Nielsen correctly says, "everybody understands what the first sentence describes,[however] you might need a law degree to fully comprehend the implications of the second sentence."
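Nielsen's pair makes an easy test case. Run both sentences through the Flesch-Kincaid grade formula (sketched below with a crude vowel-run syllable counter, my own simplification) and they come out identical: each has four words, one sentence, and the same naive syllable count, so the formula literally cannot tell them apart.

```python
import re

def fk_grade(text):
    # Flesch-Kincaid grade formula with a crude vowel-run
    # syllable counter (a simplification for illustration).
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(max(1, len(re.findall(r"[aeiouy]+", w.lower())))
                    for w in words)
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    return (0.39 * len(words) / sentences
            + 11.8 * syllables / len(words)
            - 15.59)

# Identical surface statistics, so identical scores -- the formula
# cannot see the legal weight of "waived his rights."
print(round(fk_grade("He waved his hands."), 2))
print(round(fk_grade("He waived his rights."), 2))
```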

Given what I see as the limitations of readability formulas, it's always disconcerting to be asked what readability formula I use to write my textbooks because, in all honesty, I have to say “none.” I use the Flesch-Kincaid readability feature of Word strictly as a predictor of potential difficulty. If a passage comes out higher than I think it should for the book’s audience, I check it for syntactic and linguistic features known to cause problems, e.g., distance of pronouns from their referents, embedded clauses, passive constructions, etc. (The Purdue Online Writing Lab has a list of five principles for readability that I find invaluable, available as a PDF or PPT series.) And if I really want to cover a topic that consistently comes out with a high grade level (for instance, passages on the brain, with all those pesky references to the multisyllabic word hemispheres), I will classroom test the passages in question to see how students do.

While I could wax even longer on how readability formulas should be used with extreme caution, I’ll end with a quotation from a study put out by the University of Illinois at Urbana-Champaign, available on the web at the ERIC Clearinghouse and titled “Conceptual and Empirical Bases of Readability Formulas”:

Problems arise when difficult words and long sentences are treated as the direct cause of difficulty in comprehension and are used in readability formulas to predict the readers' comprehension. Readability formulas are not the most appropriate measure and cannot reliably predict how well individual readers will comprehend particular texts. Far more important are text and reader properties which formulas cannot measure. Neither can any formula be a reliable guide for editing a text to reduce its difficulty.


The study was published in 1986, but to my mind, the sentiments are not the slightest bit dated.

10 comments:

D. Josten said...

Recently, I have written several tests--all at different grade levels (Flesch-Kincaid). The types of questions were identical in every test. Even with that degree of consistency among the tests, the grade levels seem to be totally unrelated to the performance of students in Developmental Education classes. I believe interest and perhaps prior knowledge have a much, much greater impact.

I also think the clarity of the font and the background color of the page affect student motivation to read the text. I suspect, but haven't tested, that they prefer big print and a white page. Unfortunately, I think length of the text also affects their motivation.

Laraine Flemming said...

I didn't mention reader interest in my post, but I absolutely agree with what you say.

I have had students who seemed as if they could not make their way through a text that sixth graders could master, and then suddenly a topic would come along (in the past, one such topic was the O.J. Simpson trial; more recently, the resurgence of pirates) and these allegedly poor readers would plow through detailed readings and end up capable of explaining forensic trial evidence or the complicated methods pirates use to ambush and hijack tankers. Reader interest counts in comprehension, and it counts for a lot.

As for what you say about the graphics, a lot of the research I have looked at makes the same point: how the material is presented visually contributes to how readily it is understood, and this is another element unaccounted for by formulas.

Since you obviously know about test making and taking, would you advocate greater use of informal reading tests as opposed to professionally standardized ones?

Paula R said...

I tend to share your sentiments about readability formulas, but what do you suggest we use in their place?

Anonymous said...

...and if so many people agree that they are problematic, why are they so popular?

Ruth

Laraine Flemming said...

@Ruth

Ah Ruth, a question I have asked myself numerous times.

I have been reviewing a lot of research from the eighties in which researchers can't shut up about how bad it is to write according to a readability formula and how one should never use it as anything except a predictor of possible difficulty (if that; some of them are pretty negative). What I don't know is exactly when or why the tide changed because I don't see anything coming up after that time to say, April Fool, we got it all wrong, readability formulas really are great.

The closest I have come is one web site that mentions a couple of unnamed studies, which correlated level of comprehension with grade levels derived from formulas.

I will keep researching this, however, because there might be a lot of very good research out there that proves readability formulas do really correlate with comprehension. But right now, I'm inclined to say that will happen when pigs fly.

My guess is Microsoft Word, with its easy access to a readability formula, turned the tide. However, I promise to keep looking for studies that offer a solid basis for the revival of readability formulas and will get back to you with what I find, pro or con.

Laraine Flemming said...

@Paula R.

Good question. If you are thinking about how to test the appropriateness of a new textbook, I would copy a few pages, each from a different chapter, and try the passages out on your current batch of students. You can also parcel out different passages to your colleagues and ask them to check whether the author does the following:

1. uses sentence openings to orient the reader,
e.g. "After the war ended," "As a result of the first setback," "In response to the chain of events," etc.

2. limits the use of pronouns and keeps referents close at hand

3. bases the material on an obvious underlying structure such as problem/solution and generalization/illustration.

4. supplies clear-cut statements of the main idea

5. uses familiar visual cues to highlight information,
e.g. boxes, italics, marginal annotations.

6. limits the use of embedded clauses between subjects and verbs

7. defines potentially unfamiliar vocabulary somewhere on the page.

According to reading expert and researcher John Pikulski, among others, there are around 288 different elements that make or undermine readability in the sense of comprehension. However, I consider the above components to be among the most crucial. I think using them as guidelines can tell you pretty fast if your students will or will not struggle with the text.

That being said, I'm a fan of guided readings, where the instructor helps students grapple with a text by modeling, collaborating, and asking questions. Therefore, I was gratified to read the much-discussed new book Academically Adrift, in which the authors say that making reading and writing assignments too easy is a bad idea. Struggling a little bit with a text has cognitive benefits. The trick, of course, is to find the right balance between the difficult-but-doable and the totally impossible and self-defeating. As you must know, it's not easy.

retProf said...

I can't say much about readability formulas, but your hunch about the possibly pernicious influence of Microsoft Word struck a nerve. So, even if it is off-topic: I have been railing against its flagging of hyphens in compound adjectives, which I think is why they have disappeared from most articles I read. The result is that sometimes I have to parse a sentence two, even three times before I have sorted out what the author means.

Example:

first time traveler

Does this refer to a first-time traveler or to a first time-traveler? Context may tell, but why not help the reader by putting in the hyphen so that there is no ambiguity in the first place? Do readability formulas flag missing hyphens meant to increase readability? Actually, I have an example somewhere, from an academic article, where even context does not disambiguate a compound phrase--wish I could find it.

Sorry for the rant--I couldn't help myself when I saw the red flag "Microsoft Word" (which has more profound problems, BTW, but that would really lead us too far from the topic).

More generally: Do readability formulas pay any attention to punctuation, which can make the difference between something that is easy or difficult to read?

Laraine Flemming said...

@retProf


Given my own penchant for rants, I have to say I thoroughly enjoyed yours, and I share your feeling about hyphens. For the most part, they add clarity, and I wasn't aware that Microsoft was discouraging their use, although I have noticed copy editors were more inclined to eliminate them while I was more inclined to put them in.

As for your question about punctuation, I can't speak for all readability formulas, but I do know the Flesch Reading Ease formula takes colons and semicolons into account, which I consider an important feature.

One sentence with two independent clauses linked by a colon is, I believe, no more difficult to understand than two short sentences that make the same point. For example:

Jack Johnson, the first black heavyweight champion in America, was badly treated: he spent a year in jail on trumped-up charges.

Jack Johnson, the first black heavyweight champion in America, was badly treated. He spent a year in jail on trumped-up charges.

Most traditional readability formulas--at some point I'd like to post about untraditional ones that never gained traction--would rank the first sentence as less easy to read than the second pair of sentences. But in terms of comprehensibility, I don't believe there is any difference.

Laraine Flemming said...

@Ruth aka anonymous

As promised, I've been on the web and looking through the latest edition of the Handbook of Reading Research to see what I can find about current readability studies.

The 2011 volume of the Handbook does not address the subject, and so far, while I have found an overabundance of sites ready to sell me packages of reading materials based on readability formulas, I still have not found any current research that discredits or revises the many criticisms raised against the formulas in the eighties.

I did, however, re-read George R. Klare's classic discussion of readability formulas, aptly titled "Readability." In it, Klare quotes the king of reading researchers, P. David Pearson, who makes my point about writing to readability formulas much more succinctly than I do.

After warning against writing to formula, Klare paraphrases Pearson to say, "They [readability formulas used in writing or production of text] will change the score on readability formulas without making a corresponding change in comprehensibility; they may, indeed, reduce comprehensibility if deliberately or even carelessly done" (Handbook of Reading Research, 1984, p. 723).

This is definitely my experience as both writer and reader, and it's the central reason for my initial post.

Laraine Flemming said...

@Anonymous

Thanks for your interest in the blog. It seems appropriate for me to reply here since I have the feeling that the readability-formula post was the one you were interested in, not the post on Joseph Williams. In any case, my response to you took the form of a brand new posting on cloze tests, which are, as you will see, of great interest to me.

About this blog: For years now, whenever I wanted to test out a new exercise or figure out how I’d like to address a new topic, I’ve been sending out an SOS to teachers I’ve worked with or met at conferences and online, asking them what they thought of my approach or if they had another way of addressing, say, improving students' ability to stay focused while reading on the Web.

It has now occurred to me, probably later than it should have, that a blog might be a good way to bring others into these online discussions, which, for me anyway, have been incredibly valuable. So every week or so, I’m going to post my thoughts on a topic that I consider really central to the teaching of reading and writing. In every post, I’ll include practical strategies for addressing the topic discussed.

My hope is that other instructors will respond with their thoughts and, over time, we can come up with a repository of teaching methods geared to specific objectives like teaching coherence in writing or using linguistic cues in reading and a host of others.