Blogging for Results in Reading and Writing
Wednesday, May 4, 2011
A Little Knowledge is Not Such a Dangerous Thing
Handbook of Reading Research, Volume 4, p. 349
"The construction of mental representations does not involve the application of precise, sophisticated, context-sensitive rules...." Walter Kintsch, Comprehension, p. 5
When it comes to the role background knowledge plays in reading, the verdict has been in for years now: The more background knowledge readers have about the topic discussed in print, the faster they comprehend and remember what they read. In a way, though, many instructors, including me, have found that insight to be a mixed blessing. The good news was that readers with poor comprehension were probably coping with a lack of background knowledge rather than with an innate inability to process prose efficiently.
The bad news was that the students we were seeing in reading labs and classes pretty much lacked any shred of academic background knowledge. There were lots of things they knew about life and the world, but very little of that knowledge was likely to speed up their understanding of a text describing photosynthesis or the Missouri Compromise.
Almost all the instructors I know, including me, were flummoxed by the magnitude of background knowledge we felt we needed to give students if we were to show them that textbook comprehension was not always going to be arduous and exhausting. Sure, we could hand out textbook selections that dispensed a nugget or two of academic wisdom, but those few nuggets didn't even half fill a pretty empty bucket. We did our best but still felt our efforts were inadequate, maybe even pointless, largely because students came to us knowing a lot about family dynamics, romantic relationships, pop culture, cell phone plans, etc., but painfully little about the world outside their personal lives.
Yet in retrospect, I have come to think that those of us who didn't believe we could ever provide students with enough background knowledge to be useful were thinking about it in the wrong way. We thought, or at least I did, that unless we gave students the entire lowdown on, say, the causes and consequences of the Civil War, we were wasting their time. Yet the more I read about the role of background knowledge in comprehension, the more I think we were over-estimating our task. (It's also true that the more I read the work of cognitive psychologist Walter Kintsch, the more I am convinced that detailed background knowledge is not the sine qua non of comprehension; it's having the larger framework that really counts.)
When researchers talk about background knowledge (see, for instance, "Integrating Memory-based and Constructivist Processes in Accounts of Reading Comprehension," available here), they are talking about a general framework that allows readers to call up appropriate word meanings or supply the inferences expected by the author. And general frameworks, with emphasis on the word general, are not that difficult to supply, particularly with the aid of the Web.
Through brief, sequenced assignments, we--and instructors in other departments interested in making sure their assignments are understood--can supply students with what I call knowledge "snippets" about key academic topics. These snippets can be pieces of text, videos, photos, poems, memoirs, etc. But what they must do is (1) provide a big picture or overview of what a person, theory, or event accomplished, (2) generally explain why a person, idea, or event is considered significant, (3) define a key concept essential to understanding the topic addressed, and (4) include a relevant visual aid. Students can then use those snippets to create broad knowledge frameworks that allow them to plug in, or as the researchers say, "instantiate," related facts and ideas when they encounter them in their reading.
As the reading researcher Nancy Marshall pointed out years ago, schemas, or, to use the term I prefer, knowledge frameworks, "allow an exchange of meaning to occur." They allow the reader to "include new information, to rearrange old information, or to identify old information with new sources." (Available only in print in Comprehension and the Competent Reader, p. 41.) Without these knowledge frameworks, students are in mental free fall, trying desperately to create the larger network or pattern they need in order to categorize new information coming in from their textbooks.
But keeping students from stumbling around in a text without finding any purchase is not so hard as we once may have thought, particularly with the advent of the Web. When students have assignments that confuse them, they can use heading key words, for instance, to get a general sense of what their text is about, and that general sense is probably all they need to develop a schema that will, in turn, improve their comprehension.
Thursday, March 10, 2011
No to Cloze Tests But Yes to Cloze Tasks
In responding to the question, I found my answer growing so long--a not unusual occurrence in my case--that I decided to make cloze testing the subject of a new post, because I think my experience with a cloze test illustrates some of its problems (it has to have problems; otherwise how could I do so badly?).
If I scored the passage according to the advice of the cloze procedure's inventor, W. L. Taylor--Taylor believed the reader had to fill in the blank with the exact word originally deleted--I got very few right (and no, I am not revealing how many I got wrong or right). If I went with one of the alternatives Nielsen suggests, i.e., accepting synonyms as correct answers, I got a little over 60% correct, which technically means the text was within my range of comprehension, as long as you accept that method of scoring. My husband also took it, and he got a score of 100, again following Nielsen's suggestion that synonyms were fine.
My humiliating experience on Nielsen's cloze test highlights just two of the problems with using it as a test of comprehension: (1) there's a good deal of disagreement about what counts as a correct answer (some people get around the debate by making the tests multiple choice and supplying several possible words or phrases to fill in the blanks, which I think is a good idea), and (2) the reader's background knowledge, or lack of it, plays a big role in performance. The passage was about Facebook privacy policies. My husband is on Facebook. I am not (which is clearly the reason he scored higher than I did, or at least this is what I tell myself).
But there are other problems or issues that are much debated in the literature on cloze. For instance, should the test-maker eliminate every 5th word (I think that's the most common choice), or should she eliminate every 6th, 7th, or 9th word of the "mutilated" passage? (That's how some researchers describe passages with deletions. I mention it here because I find the word choice just hilarious.)
Another question: should the elimination be random, or should it target particular parts of speech, deleting nouns rather than verbs, say? Some researchers think nouns are the big meaning carriers, so eliminating them makes the test too hard to be useful. I could go on here, but I think my point is clear: using cloze testing to determine either ease of reading or rate of comprehension raises an awful lot of complex questions about both test creation and scoring.
All that being said about why not to use cloze tests to determine comprehension or readability, I still think cloze tasks, or exercises, are a great way to sharpen students' sense of how a text builds meaning step by step or sentence by sentence. I also think cloze-based exercises make readers focus on context cues, not just for vocabulary but for overall meaning.
For those reasons, I would definitely suggest including cloze tasks in the classroom. They are a terrific way to get a sense of, for instance, how well or poorly students make use of linguistic cues that tell readers about relationships between ideas. I'm working, for example, on a cloze exercise where all explicit connectives have been eliminated, i.e., transitions, conjunctions, and opening adverbial clauses.
Used in this way, I think cloze testing, or more precisely tasking, becomes an excellent and very specifically focused diagnostic tool. Of course, it also becomes, technically, more a fill-in-the-blank exercise than a formal cloze exercise, where the deletions are usually decided on by a formula, or at least they were when Taylor introduced the cloze procedure in 1953. But Taylor's formula has been fooled with so much since he first published it in the Journalism Quarterly, more than half a century ago, that I don't feel compelled to follow it too slavishly, which is pretty much my attitude toward all formulas now that I think about it.
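For anyone who wants to see the mechanics behind these two kinds of exercises, here is a minimal sketch of how they might be generated. (Python; the sample passage, the blank marker, and the little list of connectives are all purely illustrative, and the every-nth-word function is only the classic Taylor-style deletion, not a faithful reproduction of his 1953 procedure.)

```python
import re

def classic_cloze(text, n=5):
    # Taylor-style cloze: blank out every nth word of the passage.
    words = text.split()
    return " ".join("_____" if (i + 1) % n == 0 else w
                    for i, w in enumerate(words))

# Illustrative, far-from-complete list of explicit connectives.
CONNECTIVES = {"however", "therefore", "consequently", "although",
               "because", "moreover", "thus", "nevertheless"}

def connective_cloze(text):
    # Diagnostic task: blank out only the explicit connectives,
    # so students must infer the relationships between ideas.
    def blank(match):
        word = match.group(0)
        return "_____" if word.lower() in CONNECTIVES else word
    return re.sub(r"[A-Za-z]+", blank, text)

passage = ("The formula was widely used. However, researchers kept "
           "revising it because the original scoring seemed too strict.")
print(classic_cloze(passage, n=5))
print(connective_cloze(passage))
```

Nothing here is meant as a polished tool; it is just the step-by-step logic, simple enough that an instructor could swap in a different deletion interval or a different set of connectives.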
Wednesday, March 2, 2011
Thoughts on Readability Formulas
There is, however, a specific stimulus for this post. In an effort to go paperless in my office, I’m re-reading old journal articles and deciding which ones I want to scan into my online notebook. To that end, I re-read a 1982 article in the Reading Research Quarterly (V. 18, #1, p. 23) in which the authors described their research on informal reading inventories and commented that using passages written at different grade levels didn't seem to affect students' performance. In other words, as the grade level of the text went up or down based on the readability formulas used, students’ comprehension scores didn't go up or down with it.
The authors then went on to write: “Although readability formulae reflect word difficulty and sentence complexity, they fail to account for one's familiarity with a text. Presumably, this failure to control for students' familiarity with reading material diminishes the validity of both commercial and curriculum-based inventories developed with readability formulae."
Despite the resurgence of readability formulas since that article was written (in the 1980s, they were roundly and repeatedly criticized, and both the IRA and NCTE discouraged their use in the creation of written materials), I think the authors' suggestion that readability formulas are not fully adequate to the task of revealing how well students might or might not understand what they read is still timely. Actually, as many discussions of readability formulas and their history point out—see, for just one instance, the Plain Language Association website—the formulas were meant to measure ease of reading, not comprehension.
If you find that distinction confusing, you're not the only one. But after pondering it a bit, I think it means a passage coming in at a low grade level, based on a readability formula, could be easily read if you define "reading" as knowing and pronouncing the words. However, that ranking can't tell you whether or not readers can readily grasp the concepts or relationships expressed in the passage. That is, a passage given a low grade level by a readability formula is not necessarily easy to understand. By the same token, passages that earn high grade levels aren’t necessarily hard to read.
Readability formulas don't measure comprehension because they do not take into account key comprehension factors such as the syntactic complexity of the language, the familiarity of the vocabulary, the reader’s background knowledge, and the text’s conceptual difficulty. Instead, they rely mainly on sentence length and syllable counts, with some formulas including elements like passive constructions and prepositions.
For instance, a readability formula would treat ennui and boredom as equals because both words have two syllables. Yet the truth is most students would immediately know the meaning of boredom and be dumbfounded by the word ennui. Similarly, inconceivable is shorter than unimaginable, but that certainly doesn't make it easier for student readers to interpret.
Readability formulas also rely heavily on length of sentences to identify ease of reading. Sentence length, though, doesn't tell the whole story. As the web usability guru Jakob Nielsen points out in his discussion of writing for the web, these two sentences are the exact same length but conceptually, they are far from equal:
He waved his hands.
He waived his rights.
As Nielsen correctly says, "everybody understands what the first sentence describes,[however] you might need a law degree to fully comprehend the implications of the second sentence."
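To see just how little of that difference a formula registers, here is a minimal sketch of the arithmetic behind a Flesch-Kincaid grade level. (Python; the syllable counter is a crude vowel-group heuristic, nothing like what Word or any polished tool uses, so take the absolute numbers loosely.) Fed Nielsen's two sentences, it returns the identical score for both:

```python
import re

def rough_syllables(word):
    # Crude heuristic: count groups of vowels. Real tools use
    # dictionaries and handle silent e's, so counts are approximate.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_kincaid_grade(text):
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(rough_syllables(w) for w in words)
    # The standard Flesch-Kincaid grade-level formula: nothing but
    # average sentence length and average syllables per word.
    return (0.39 * (len(words) / len(sentences))
            + 11.8 * (syllables / len(words)) - 15.59)

print(flesch_kincaid_grade("He waved his hands."))    # same score...
print(flesch_kincaid_grade("He waived his rights."))  # ...same score
```

The formula, in other words, sees only averages of counts; waved and waived, hands and rights, are indistinguishable to it.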
Given what I see as the limitations of readability formulas, it's always disconcerting to be asked what readability formula I use to write my textbooks because, in all honesty, I have to say “none.” I use the Flesch-Kincaid readability feature of Word strictly as a predictor of potential difficulty. If a passage comes out higher than I think it should be for the book’s audience, I check it for syntactical and linguistic features known to cause problems, e.g., distance of pronouns from their referents, embedded clauses, passive constructions, etc. (The Purdue Online Writing Lab has a list of five principles for readability that I find invaluable, available as a PDF or PPT series.) And if I really want to cover a topic that consistently comes out with a high grade level, for instance, passages on the brain with all those pesky references to the multisyllabic word hemispheres, I will classroom-test to see how students do with the passages in question.
While I could wax even longer on how readability formulas should be used with extreme caution, I’ll end with a quotation from a study put out by the University of Illinois at Urbana-Champaign, available on the web through the ERIC Clearinghouse and titled “Conceptual and Empirical Bases of Readability Formulas”:
Problems arise when difficult words and long sentences are treated as the direct cause of difficulty in comprehension and are used in readability formulas to predict the readers' comprehension. Readability formulas are not the most appropriate measure and cannot reliably predict how well individual readers will comprehend particular texts. Far more important are text and reader properties which formulas cannot measure. Neither can any formula be a reliable guide for editing a text to reduce its difficulty.
The study was published in 1986, but to my mind, the sentiments are not the slightest bit dated.
Tuesday, March 1, 2011
For Jordan and Julie
I have, however, been inadvertently shamed into trying again by Julie Williams of Ramapo College who wrote me a really nice e-mail saying that she agreed with what I wrote about the voice on the page, even if it was written a year ago! So within the next day or so, I’ll post on a topic that I’m asked about a lot--readability formulas. And if I dawdle, I’m hoping Jordan and Julie will hold me to my promise and get me moving.
I do want to mention, though, for anyone who responds: please don't tell me what you think via e-mail. If you do, I will just start exchanging e-mails on the subject and, once again, forget about posting anything here, when my goal really is to get an exchange going among people who share my interest in best practices and current problems in teaching reading and writing.
Monday, April 5, 2010
The Person on the Page
Probably because they assume I know a good deal more than I do, instructors often write asking me what I think are the most important aspects of critical thinking or reading. I usually answer by identifying two elements of critical reading I have focused on as both a teacher and a writer of textbooks: (1) the analysis of arguments, not just their reasons and conclusions but also their underlying premises and (2) the recognition of an implicit bias that reveals itself through imagery, allusion and what’s left unsaid.
For the record, I still think those are two of the most important topics to cover in a critical reading course. But lately, I’m inclined to add a third element to my admittedly brief list of absolutely critical topics, the notion of the “implied author,” a term used by the literary critic Wayne C. Booth. In the simplest terms, Booth’s “implied author” is not the actual author of the text. It’s the person conjured up by the words on the page. It’s the imaginary author the readers feel they know because they've read the author’s work.
I’ve been thinking about Booth’s concept because of a little book I recently read called “The Sound on the Page” by Ben Yagoda. I really liked the book and agree with Yagoda’s claim that “no truly transparent or anonymous style can exist: the many choices the act of writing requires will sooner or later betray a stance, an attitude, a tone.” In other words, a personality will come through in the writing; sometimes the personality is colorless, tedious, and earnest, as it all too often is in far too many textbooks. (In a future post, I’d like to introduce a list of the exceptions, textbooks like Joseph Conlin’s American history text, "The American Past," which manages to be both informative and delightful to read. Textbooks like this one make teaching so much easier.) But one way or another, a sense of the person behind the prose is there on the page, even if the writer did his or her best to appear totally objective and totally impersonal (although why anyone would even want to do that is not clear to me).
I think this idea of a personality emanating from the page is a valuable concept for students to consider because it could make them do what both Booth and Yagoda do: Look very closely at a text and pay attention to the many different devices writers use to create the personality they want readers to respond to. Among those things are word choice, use of formal and informal language, alliteration, references (or lack of them) to the self, sentence length, punctuation, imagery, anecdote, choice of simile or metaphor, presence (or absence) of example.
The list is very long, longer than the one I created here. To a large degree—obviously content plays no small role—these are the things that make the writer sound like a particular person, to sound, for instance, like an implied Harold Bloom or Maureen Dowd, to link two writers who couldn’t be more different. Bloom sounds passionately serious, as if he emerged from the womb quoting Milton, and Dowd, in print at least, always seems to long for another life as a stand-up comic.
What Yagoda also points out, with the help of lots of revealing comments from famous writers, is how writers struggle to evoke and even sometimes override the author they summon with their words. The essayist and novelist Anna Quindlen apparently thinks that she developed her written style from her battle with stuttering: “I think there’s clearly a link between trying to create a charming, erudite and coherent ‘voice’ on the page and being unable to use your voice easily in real life.” The novelist Junot Díaz talks about his ongoing battle with the person who turns up when he starts writing: “I have problems saying what I need to say. That’s odd, because my personality tends to be blunt, straight-forward, outspoken. My written personality is nowhere near as dynamic. I have a hundred failed stories in my drawer, and they all have the mark of the writerly person I for some reason need to be.”
I think Booth’s concept and illustrations like those from Yagoda’s book could combine to make students better readers and writers. Students could, for instance, answer the question, “Whom do you want to evoke in the mind of readers when they see your words on the page?” As a group, students could analyze several pieces of writing by writers with a strong style or voice--I think Calvin Trillin is a good choice, Amy Bloom another--and make a list of the kinds of devices they see these writers using. Trillin likes comic exaggeration, for instance, while Bloom often injects short, direct, personal comments into her essays and reviews. Once the list is compiled, students could think about how they want to sound on the page and write a paragraph or two that calls up the person they want to be in print.
Tuesday, February 2, 2010
Starting at the Beginning, or God Bless Joseph Williams
When I was in graduate school, I got a “D” on one of my first papers and my professor told me that I should consider dropping out of the program because my writing was incoherent. I was aghast. I had always gotten As in high school and thought I was a good writer, not great but good.
Thus began a long struggle to learn how to write by studying what was considered good writing in academia. Along the way, I read—and still read—books on how to write. I found a lot of them useless, filled with general advice about the importance of things like clarity and coherence. That was all fine, but my question was this: What was a concrete way to achieve those things?
At some point in my search, I picked up a copy of Style: Ten Lessons in Clarity and Grace by Joseph Williams and saw what I had been looking for: step-by-step explanations of how to achieve clarity and coherence. In a marvelously straightforward fashion, Williams explained how to use sentence openings to keep readers following along as they moved from sentence to sentence. That advice can be summed up in a German proverb used in the book: “Beginning and end shake hands with each other.”
One key point Williams makes is the importance of sentence openings as directives, guides or, as he says at one point, “orienters” to what follows. In other words, when writers open a sentence with the word “Allegedly,” or “As numerous studies show,” they already alert readers to how the information about to come up should be viewed, the first one with a bit of suspicion, the second as potentially sound evidence.
Just as important, it’s at the beginning of sentences that the writer tells a reader: here’s how the new information in the sentence you are about to read links up with what you’ve just learned from the previous sentence, in other words, how the new sentence shakes hands with the old.
Guided by Williams’s advice, I continue to find new ways to teach reading and writing students about the importance of sentence openings. I have developed a chart identifying the various signals that common sentence openers, or orienters, can supply (the chart goes well beyond the more typical “for example” and “however”). Anyone interested in seeing the chart, adding to it, creating their own, or explaining why they do or do not think sentence openings are an important element of teaching reading and writing, please do comment or reply.
Probably later than it should have, it’s now occurred to me that a blog might be a good way to bring others into these online discussions, which, for me anyway, have been incredibly valuable. So every week or so, I’m going to post my thoughts on a topic that I consider really central to the teaching of reading and writing. In every post, I’ll include practical strategies for addressing the topic discussed.
My hope is that other instructors will respond with their thoughts and, over time, we can come up with a repository of teaching methods geared to specific objectives, like teaching coherence in writing or using linguistic cues in reading, among a host of others.