Why is so much writing so bad, Steven Pinker asked, and how can we make it better?
One common theory is that bad writing is a deliberate choice by bureaucrats who use gibberish to evade responsibility or by pseudo-intellectuals who want to hide the fact that they have nothing to say. But good people can write bad prose, said Pinker. Another theory suggests that digital media are ruining the language, because we can all recall that in the 1980s, Pinker quipped, “teenagers spoke in coherent paragraphs.”
A better theory is that whereas speaking comes naturally to us, writing doesn’t. “Writing is and always has been hard,” said Pinker. “Readers are unknown, invisible, inscrutable—and exist only in our imagination.”
What can we do to improve writing, then? Some would suggest reading books like The Elements of Style, but among some good advice—such as using definite, concrete language and omitting needless words—is advice that is obsolete or downright baffling. “The problem with traditional style advice,” said Pinker, “is that it’s an arbitrary list of do’s and don’ts based on the tastes and peeves of the authors.”
Instead, we should base our writing advice on the science and scholarship of modern grammatical theory, evidence-based dictionaries, cognitive science, and usage. Pinker made a case for classic style, which uses “prose as a window onto the world.” Reader and writer are equals, and the goal of the writer is to help the reader see objective realities. “The focus is on the thing being shown, not the activity of studying it,” said Pinker. The latter is a feature of self-conscious style that contributes to the verbosity and turgidity of academic and bureaucratic writing.
“Classic prose is about the world, not about the conceptual tools with which we understand the world,” said Pinker, who suggested avoiding metaconcepts and nominalizations. But he urges caution on the common advice to avoid the passive voice—especially since the advice itself often uses passive voice while condemning it. “The passive could not have survived in the English language for 1500 years if it did not serve a purpose,” said Pinker. English sentences rely on word order to convey both grammatical information and content. We expect material early in the sentence to name the topic (what the reader is looking at) and later in the sentence to show the focal point (what the reader should notice). “Prose that violates these principles feels choppy and incoherent.”
So “avoid the passive” is bad advice. But why is it so common in bad writing? “Good writers narrate a story, advanced by protagonists who make things happen,” said Pinker, whereas “bad writers work backwards from their own knowledge.”
Too much knowledge can be a curse: “When you know something, it’s hard to imagine what it is like for someone else not to know it.” It’s this curse of knowledge that leads to opaque writing. The traditional advice to solve this problem is to assume a reader is looking over your shoulder at what you write. “The problem with the traditional solution is that we’re not very good at guessing what’s in people’s heads just by trying hard,” said Pinker. A better approach is to show your draft to a representative reader, or “show a draft to yourself after some time has passed and it’s no longer familiar.” Rewrite several times with the single goal of making prose more accessible to the reader.
Rules of usage are another battleground in writing, but Pinker said that the “prescriptivist versus descriptivist” paradigm is a false dichotomy. Rules of usage aren’t logical truths and are not officially regulated by dictionaries, he said. They are tacit, evolving conventions. “Many supposed rules of usage violate the grammatical logic of English, are routinely flouted by the best writers, and have always been flouted by the best writers. Obeying bogus rules can make prose worse.”
How does the writer or editor distinguish real usage from those bogus rules? “Look them up!” said Pinker. “Modern dictionaries and usage manuals do not ratify pet peeves,” he said. “Their usage advice is based on evidence.”
In any case, Pinker said, “correct usage is the least important part of good writing,” compared with a conversational classical style, a coherent ordering of ideas, factual accuracy, and sound argumentation.
As I was reading about the stigma of mental illness, I was struck by the lack of a mainstream term to describe the discrimination that arises from that stigma. This void in our everyday terminology is telling: it implies that the oppression people with mental illness face is so commonplace and routine that it doesn’t merit its own label. I submit that until we name it, we can’t effectively discuss it, and the absence of this name makes it easy for many of us to ignore it or deny its existence.
Advocacy and research organizations such as the Mental Health Commission of Canada tend to use the term “mental health stigma,” but I’d argue that finding a single word to describe discrimination against people with mental illness helps put it on par with similar forms of bigotry, including racism and sexism.
Sanism versus mentalism
Two terms that have been proposed to label the discrimination against people with mental illness are sanism and mentalism, which have appeared in legal and social science research circles but haven’t caught on with the public or with mass media. Sanism was coined by attorney Morton Birnbaum in the 1960s, when he was representing Edward Stephens, a patient with mental illness who claimed he was receiving inadequate treatment. Law professor and mental health advocate Michael L. Perlin has perpetuated the term in legal literature, writing extensively about it since the 1980s. American activist and educator in the psychiatric survivor movement Judi Chamberlin coined the term mentalism in her book On Our Own: Patient Controlled Alternatives to the Mental Health System, published in 1978. Neither sanism nor this definition of mentalism appears in the Oxford English Dictionary (OED).
I strongly prefer sanism, not least because mentalism already carries meaning in many other contexts, including:
the performing arts, where it refers to a magic trick or illusion that makes the performer appear to have extraordinary mental abilities;
philosophy, where it refers to the doctrine that objects of knowledge exist only in the mind; and
psychology, where it refers to areas of study that focus on mental perception, in contrast to behaviourism.
And with mentalist gaining a foothold in pop culture as the name of a long-running TV show, calling out discriminatory behaviour as mentalist would be confusing.
Ableism (attested in the OED in 1981—thus a more recent coinage) has been used to describe discrimination against people with disabilities, including cognitive disabilities, but because mental illness doesn’t necessarily lead to disability, I see value in distinguishing between ableism and sanism.
Embracing the use of sanism in our everyday language lets us better acknowledge the many parallels between it and other –isms (or –isms masquerading as phobias).
Islamophobia and sanism compared
Whenever we hear of an individual committing an act of mass violence, it seems we’re eager to pigeonhole them into one of two categories: Muslim or mentally ill (or sometimes both, as in the case of Michael Zehaf-Bibeau). Muslims are all too aware of our knee-jerk reaction to point the finger at Islamic extremists for all acts of terror. From a Washington Post story after the 2013 Boston Marathon bombing:
As a Libyan Twitter user named Hend Amry wrote, “Please don’t be a ‘Muslim.’” Her message was retweeted by more than 100 other users, including well-known journalists and writers from the Muslim world.
Jenan Moussa, a journalist for Dubai-based Al-Aan TV, retweeted the message “Please don’t be a ‘Muslim’” and added that the plea was “The thought of every Muslim right now.” Moussa’s message was forwarded more than 200 times.
When the perpetrators turn out not to be Muslim, the public is eager to find out what kind of mental illness they must have had. Anders Behring Breivik, the Norwegian gunman who took 77 lives in 2011, was branded a paranoid schizophrenic following an initial court-ordered psychiatric review, and although a later review concluded that he did not have schizophrenia, the first diagnosis still made its way into articles and books, often with no corrections or retractions. When Germanwings co-pilot Andreas Lubitz deliberately crashed his plane into the French Alps, killing all 150 people aboard,
[t]he incident sparked headlines such as “Madman in the cockpit” from the Sun newspaper, “Killer pilot suffered from depression” from the Daily Mirror, and “Mass-killer co-pilot who deliberately crashed Germanwings plane had to STOP training because he was suffering depression and ‘burn-out’” and “Why on earth was he allowed to fly?” from the Daily Mail.
roughly 3–5 percent of violence in the United States could be attributed to persons with mental illnesses. Moreover, results of studies from England and New Zealand indicate that in those countries, the percentage of homicides accounted for by persons with major mental illnesses has fallen in recent decades despite policies of deinstitutionalization that have placed more people with severe mental illnesses in the community. Data also suggest that most violence committed by persons with mental illnesses is directed at family members and friends rather than at strangers and tends to occur in the perpetrator’s or the victim’s residence rather than in public places… Thus, while there may be a causal relationship between mental illnesses and violence, the magnitude of the relationship is greatly exaggerated in the minds of the general population.
In fact, people with mental illness are far more likely to be victims of violence: a 2012 meta-analysis of observational studies found that adults with a mental illness were 3.86 times as likely to be on the receiving end of violence compared with adults with no disability.
Automatically attributing mass violence to people with mental illnesses is sanist, completely analogous to the Islamophobia that columnists and advocacy groups are becoming quicker to condemn.
Homophobia, transphobia, and sanism compared
the risk of attempting suicide was 20% greater in unsupportive environments compared to supportive environments. A more supportive social environment was significantly associated with fewer suicide attempts, controlling for sociodemographic variables and multiple risk factors for suicide attempts, including depressive symptoms, binge drinking, peer victimization, and physical abuse by an adult (odds ratio: 0.97 [95% confidence interval: 0.96 – 0.99]).
Among those who are transgender or gender non-conforming, 41% attempt suicide sometime in their lives, according to the National Transgender Discrimination Survey. However, “A supportive environment for social transition and timely access to gender reassignment, for those who required it, emerged as key protective factors,” according to UK researchers.
In other words, homophobia and transphobia exacerbate suicide risk.
Mental illness, particularly mood disorders and substance misuse, is also associated with an increased suicide risk. Risk and Protective Factors for Suicide and Suicidal Behaviour, a 2008 literature review funded by the Scottish government, reported that among the 894 cases of suicide they studied, “the majority of cases (88.6%) had a diagnosis of at least one mental disorder. Mood disorders were most frequent (42.1%), followed by substance-related disorders (40.8%).” It also put the risk of dying by suicide in those diagnosed with schizophrenia at 4.9%, compared with 0.010% to 0.015% in the general population. However, as Simon Davis reports in Community Mental Health in Canada, “often [suicide] occurs not in response to symptoms, such as command hallucinations, but when the individual is seeing reality clearly and facing (apparently) a future of diminished prospect and social rejection.”
Much like homophobia and transphobia, sanism—including self-stigma—exacerbates the suicide risk among people with mental illness.
Racism and sanism compared
In the wake of incidents of police violence against members of the black community in the United States, including the deaths of Michael Brown in Ferguson, Missouri, and Eric Garner in New York, activists in and around the #BlackLivesMatter movement have worked to expose the myriad ways racism has become institutionally entrenched:
Poverty: U.S. Census Bureau data show that in 2010, 27.4% of black Americans lived in poverty, compared with a national average rate of 15.1%.
Unemployment: The Bureau of Labor Statistics shows the unemployment rate of black Americans hovering at around 10%—double that of white Americans.
Health disparities: According to a Centers for Disease Control brochure, African Americans are 25.4% more likely to die of cancer, twice as likely to die of diabetes, and 30.1% more likely to die of heart disease and stroke, compared with white Americans. Black Americans have a life expectancy 3.8 years shorter than that of white Americans.
People with mental illness have experienced a history comparable to that of black Americans, with segregation manifesting as institutionalization, and they are overrepresented in the same contexts:
Poverty: Poverty is both a cause and a consequence of mental illness. A 2013 U.S. study found that having a person with a severe mental illness in your household increases your risk of poverty three-fold.
Unemployment: The unemployment rate of persons with serious mental illness reflects these obstacles and has been commonly reported to range from 70% to 90%, depending on the severity of the disability. These statistics are particularly disturbing in light of the fact that productive work has been identified as a leading component in promoting positive mental health and in paving the way for a rich and fulfilling life in the community.
Health disparities: People in poor mental health are also likely to be in poor physical health. A combination of psychiatric medications that increase the risk of metabolic syndrome, lifestyle, and socio-economic factors contribute to a mortality ratio six times that of the general population. People with serious mental illness can expect to live 15 to 20 years less than people without a mental illness.
Involvement with the criminal justice system: According to a 2006 U.S. Bureau of Justice Statistics report, people with mental illness represent “56% of State prisoners, 45% of Federal prisoners, and 64% of jail inmates.” The National Alliance for the Mentally Ill’s 2003 survey found that 44% of people with a serious mental illness will have had dealings with the criminal justice system.
Advocates of inclusive and conscientious language use have campaigned to raise awareness of sanism in our communications, suggesting the best ways to write about suicide, for example, and encouraging writers to use “people first” language (that is, “people with a mental illness” rather than “mentally ill people” or, worse, “the mentally ill”). These same guidelines often recommend that people avoid using stigmatizing words like crazy or psycho, but these terms have become so much a part of our daily language, not to mention popular culture, that eradicating them from general use is unrealistic.
Idiot, lunatic, and insane were once clinical or legal terms, but they’ve all had their turn on what psycholinguist Steven Pinker calls the euphemism treadmill, where a term becomes more and more corrupted semantically until a new euphemism is needed to take its place. They’ve also lost much of their clinical meaning with widespread use.
These kinds of broad umbrella terms used to describe mental illness may be hard to contain, but where we can make headway is in educating the public to avoid using names of specific mental illnesses to describe personal quirks, as Miley Cyrus did in a 2010 interview, saying, “I’m kind of bipolar in my acting choices because I just want to do a little bit of everything.” In a recent Vanity Fair article, Saturday Night Live alum Will Forte claimed to be “a little OCD” about his shampoo routine, a usage that has also been criticized.
The most difficult sanist language to sanitize may be terms describing substance misuse: we derisively throw around words like junkie, crackhead, and wino without a second thought. Until policy makers fully acknowledge that drug use should be a medical rather than a legal issue, we may find these loaded descriptors hard to eliminate.
A call to action—and articulation
It’s high time sanism entered the mainstream. I call for everyone (and especially journalists, bloggers, and mental health advocates) to look out for it, name it when you see it, and condemn it. Only when we end the stigma will people with mental illness feel comfortable seeking the help they need to keep themselves—and the rest of us—safe.
What struck me most when rebuilding Supply and Services Canada’s plain language guides (Plain Language: Clear and Simple and Pour un style clair et simple) was that the French guides aren’t simply the English guides, translated. Although both guides teach the same underlying principles—understanding your audience and the purpose of your document; planning and organizing your document before writing; achieving clarity at the word, sentence, and paragraph levels; implementing a design that supports readability; and user testing with your intended audience—the differences in content between these guides drove home that plain language is language specific.
“Well, obviously,” you might be thinking. Different languages have different grammar, syntax, and vocabulary, each rife with opportunities for ambiguity that have to be tackled individually. The French guides, for instance, address appropriate usage of the indefinite pronoun “on” in plain language, which isn’t a consideration in English. Studies have also shown a “language-specific rhetorical bias” when it comes to using (and by extension, tolerating) the passive voice.
What’s more, the audiences are likely to be vastly different. Even within Canada, French and English speakers have different cultural backgrounds, and those who have neither French nor English as their first language are more likely to learn English than French, meaning that publications in English have to be more sensitive to the needs of ESL speakers than those in French to FSL speakers. A document in plain French, if translated into English, may no longer be plain.
So, being a bit of a workflow nerd, I wondered where translation best fits into the plain language process. Translators have complained that translation is often an afterthought, not considered until the document in the source language is complete. In many cases, though, given that the clarity of the source text can determine the quality of the translation, working with a fully polished text makes sense. Yet, the inherent differences in audience would imply that, for documents that we know will be available in more than one language, developing separate texts in parallel, from the outset, would most effectively get the message across. This approach would be a nightmare for translation revisers and designers of bilingual documents, however, and it certainly isn’t the most budget-friendly option. Would the most efficient approach be to translate after plain language editing but before design, then do parallel testing sessions for the source and target languages?
If you or your organization translates plain language documents, tell me—what do you do? How well does your system work, and what would you change?
Earlier this year I rebuilt Supply and Services Canada’s eminently useful but out-of-print plain language guides, including the two sixty-page booklets, Plain Language: Clear and Simple and Pour un style clair et simple, as well as the thorough, two-hundred-page Plain Language: Clear and Simple—Trainer’s Guide, which gives trainers the materials they need to run a two-day plain language workshop.
I’d wondered if a French trainer’s guide existed. (My online searches turned up nothing.) Plain language expert Dominique Joseph tracked it down and sent me a copy, which I’ve also rebuilt from scratch. Here is the PDF, free to download. I’ve also uploaded the guide to CreateSpace for anyone wanting to order a hard copy (and for discoverability).
To keep the complete set in one place, I’ve added these links to the original post where I made the guides available.
A million thanks go to Dominique Joseph for finding this French guide, sending it to me, and carefully proofing the rebuilt file.
In 1991, in the heyday of the push for plain language in government, Supply and Services Canada produced a sixty-page plain language writing guide, in each official language, called Plain Language: Clear and Simple and Pour un style clair et simple. According to one of my colleagues, every federal employee at the time got a copy, and the guides were also available for sale to the public. Three years later, the same federal department published the companion volume Plain Language: Clear and Simple—Trainer’s Guide, which, in 220-odd pages, contains all of the materials a trainer might need to lead a two-day plain language course, including
text detailing the steps of (and reasons for) the plain language process,
I found out about these resources when I was volunteering for the PLAIN 2013 conference in the fall and was able to dig through the archives of Plain Language Association International. “People still ask for them all the time,” Cheryl Stephens told me, “but they’re not easy to find.”
She wasn’t kidding. As of right now, on Amazon.ca, one “new” copy of the sixty-page English booklet is available for $94.36; used copies are going for $46.39. I can’t find the French booklet or the trainer’s guide on Amazon at all.
And it’s no wonder they’re so coveted. Despite their age, they are still among the best plain language writing guides that I have come across. The smaller booklets are succinct and easily digestible, and the trainer’s guide is detailed and persuasive. The references are out of date, of course, as is some of the design advice, but otherwise, they remain solid references and are certainly great starting points for anyone hoping to learn more about plain language.
The federal government tweaked Crown copyright in 2013, leaving each department to manage its own copyright, but seeing as Supply and Services Canada no longer exists, I’m going to assume Crown copyright still applies to these publications, meaning that I am allowed to make copies of them as long as I distribute them for free or on a cost-recovery basis.
Before I returned the PLAIN archives to Cheryl, I photographed the pages from all three volumes and have rebuilt them from scratch, replicating the originals as closely as possible, down to the teal-and-purple palette that was so inexplicably popular in the nineties. And here they are:
The PDFs are free to download. I also published them via CreateSpace in case anyone wanted a hard copy (the list prices are set to the lowest allowable and are for cost recovery only) but primarily for discoverability, because within a few weeks of this post, all three should come up in a search on the extended Amazon network. The two little booklets are in colour, which is why they’re a little pricier, but I chose to offer the trainer’s guide in black and white, because the only colour was in the “Tips for trainers” inserts and I didn’t think it was worth increasing the price for just those twenty pages. The PDF of the trainer’s guide has those supplementary pages in colour.
If anyone from the Government of Canada would like to reclaim copyright over these publications, please get in touch. I’m not making any money off of them, of course, and I don’t mind relinquishing my rights over the files, but I would like them to be available.
I don’t know if a French version of the trainer’s guide exists, but if someone has it and would be willing to lend it to me or scan it for me, I would be happy to rebuild it as well. (UPDATE: Dominique Joseph tracked down a copy of the Guide du formateur, and I’ve added the rebuilt file to the above list.)
Thanks to Cheryl Stephens for providing the originals and Ruth Wilson for supplying a couple of pages that I was missing. Huge thanks also to my extraordinary volunteer proofreaders: Grace Yaginuma, who cast her eagle eyes over the English booklet and trainer’s guide, and Micheline Brodeur, who proofed the French booklet and supplied the translation for the descriptive copy on CreateSpace. Finally, a tip of the hat to whoever created these enduringly useful resources in the first place. We owe you a great debt.
UPDATE—July 21, 2014: A million thanks to Dominique Joseph for finding and sending me a copy of the Guide du formateur, proofreading the rebuilt document, and drafting the descriptive copy for CreateSpace.
Veteran plain language advocates Neil James and Ginny Redish shared some eye-opening statistics about web and mobile use at the PLAIN 2013 conference that may prompt some organizations to reprioritize how they deliver their content. In 2013, for example, there were 6.8 billion mobile phones in use—almost one for every person on the planet. Half of the users were using their mobiles to go online. In 2014, mobiles are expected to overtake PCs for Internet use. Surprisingly, however, 44% of Fortune 100 companies have no mobile site at all, and only 14% of consumers were happy with their mobile experience. Mobile users are 67% more likely to purchase from a mobile-friendly site, and 79% will go elsewhere if the site is poor.
People don’t go to a website just to use the web, explained Redish. Every use of a website is to achieve a goal. When writing for the web, always consider
purpose: why is the content being created?
personas: who are the users?
conversations: what do users have to do to complete their task?
Always write to a persona, said Redish, and walk those personas through their conversations. Remember to repeat this exercise on mobile, too.
Consider the following areas when creating content:
Words, noted the presenters, are only one element out of seven.
Some basic guidelines
Build everything for user needs
Again, think of who your users are and what they are trying to accomplish. Consider their characteristics when they use your site. Are they anxious? Relaxed? Aggressive? Reluctant? Keep those characteristics in mind when creating your content.
Consider the physical context
Mobiles are a different physical environment compared with a tablet or PC. The screens are smaller, and type and links on a typical website are too small to read comfortably. Maybe soon we’ll have sites with responsive design that change how content is wrapped depending on the device being used to read it, but for now, creating a dedicated mobile version of a site may be the best way to ensure that all users have an optimal experience on your site regardless of the device they use.
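(Responsive design has since become standard practice.) The mechanism can be sketched in a few lines of CSS: a media query detects narrow viewports and swaps the default multi-column layout for a single column with larger touch targets. The selector names below are illustrative only, not taken from any particular site:

```css
/* Default (desktop) layout: main content and sidebar side by side */
.content {
  display: flex;
}
.sidebar {
  width: 30%;
}

/* On narrow screens, stack the columns and enlarge tap targets */
@media (max-width: 600px) {
  .content {
    display: block;        /* sidebar wraps below the main content */
  }
  .sidebar {
    width: 100%;
  }
  a {
    display: inline-block;
    padding: 0.5em 0;      /* larger touch targets for fingertips */
  }
}
```

Because the same HTML serves every device, this approach avoids maintaining a separate mobile site, though a dedicated mobile version can still make sense when mobile users' goals differ sharply from desktop users'.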
Select the best channels
Smartphones, equipped with cameras, geolocators, accelerometers, and so on, are capable of a lot. We need to be creative and consider whether any of these functions could help us deliver content.
Simplify the navigation
Minimize the number of actions—clicks and swipes—that a user needs to do before they get to what they want. “People will tolerate scrolling if they’re confident they’ll get to what they want,” said James.
Prioritize the content on every page
Put the information users want at the top, and be aware that, for a given line length, a heading with more words will have smaller type, which can affect its perceived hierarchy.
Design for the small screen
Pay attention in particular to information in tables. Do users have to scroll to read the whole table? Do they need to see the whole table at once to get the information they need?
Cut every word you can
The amount of information you can put on a website might be seemingly infinite, but for mobile sites, it’s best to be as succinct as possible. Pare the content down to only what users would need.
This interview also appears on The Editors’ Weekly, the Editors’ Association of Canada’s official blog.
A friend of mine was venting to me about his old boss, who used to look over his reports. Whenever his boss found an error, he’d not only circle it but also emphasize his discovery with an exclamation point—a practice that drove my buddy nuts. Encoded within this tiny mark of punctuation was his micromanaging boss’s chiding disapproval: “HEY! THERE’S A MISTAKE RIGHT HERE! WHAT’S WRONG WITH YOU?”
I was relating this story to my good friend Naomi MacDougall, an award-winning designer, and she told me she once had to work with a proofreader whose markup she found “overly aggressive.” We both had a good laugh about that, but the conversation got me thinking: Whereas most of us have switched to editing on screen, a lot of us still proof on hard copy, and our markup is often the only communication we have with a designer, whom we may not know and may never meet. It’s a bit of a weird working relationship—more distant than others in the publication production chain. How can we be sure that our markup isn’t inadvertently pissing off the designer? I asked Naomi to sit down for an interview to talk about some of these issues.
IC: When you mentioned that a proofreader you’d worked with had “overly aggressive markup,” what did you mean by that? What did the proofreader do that rubbed you the wrong way?
NM: Mostly it was the use of all caps and lots of exclamation points at the end of every note. It made me feel as though I was being yelled at. The tone of the markup put me on the defensive.
IC: Are there other things proofreaders have done that you wish they wouldn’t do?
NM: There have been times when the markup hasn’t been clear, and obviously that’s tricky. It’s frustrating to have to sit there and puzzle over what a letter is. Also, occasionally, I feel like the markup has left too much open to interpretation. Because we’re often going through these changes quickly, I don’t want to have to be deciphering code.
On the flip side, if something is very obvious in the markup—like if a letter is dropped or a word inserted into a sentence—then you don’t have to spell it out again by rewriting the sentence in the margin. But when there are lots of moving words and punctuation marks in a sentence, it’s really helpful if the proofreader rewrites the sentence in the margin.
I guess what I’m trying to say is that I’d like as much clarity as possible in markup. I’m intelligent, but I’m not a mind reader.
IC: When there’s a problem like a bad break or a widow, would you prefer that the proofreader just point out the problem so that you can find a solution, or would you rather the proofreader suggest a fix?
NM: That’s a good question. In most cases I would say just point out the problem, unless it’s obvious it’s going to be very tricky to fix—then it’s hugely appreciated when the proofreader suggests a fix, especially if it involves cutting or inserting words.
IC: What’s your preference when there are more extensive passages of text that need to be inserted? How long would an insert have to be before it’s better to send you new text in an email rather than writing it in the margin for you to rekey?
NM: I would say I’d want new text for anything longer than one sentence or two short sentences. There’s just more room for error when I have to type a bunch of text. And if you’re moving more than, say, four words around in a sentence, just rewrite the sentence and have me retype it. It takes less time than moving all those words around and making sure they’re all in the right place.
IC: I think you were telling me earlier that different proofreaders approach word substitutions differently. Some mark a word as deleted and then add a caret to show that a word in the margin should be inserted, whereas others just cross out the word in the proof and write its replacement in the margin, without the caret.
NM: Yes, I like the caret. I find it clearer.
IC: It’s a visual cue for the designer to look in the margin.
NM: Exactly. It takes out that second of guesswork.
IC: Which can really add up!
IC: Is there anything else proofreaders do that you really appreciate?
NM: I always appreciate neat printing, and I always appreciate it when a proofreader uses a bright ink, like red or purple or anything that stands out against the type. Often I’m scanning a page quickly, and if the markup is in pencil or black or blue pen, I tend to miss more of the changes. They don’t jump off the page as easily, so I have to take more time to look at each page closely.
Also, I really appreciate it when the proofreader suggests a global change at the beginning of the document if a word is misspelled throughout. It’s so much quicker for me to search and replace these in one go. But I also like it when these words are highlighted in the text so that I can double-check that the change was made and check for reflow, since, during a global change, there’s always the potential for a line to break differently.
IC: Do you ever return communication on the proofs? What kinds of things do you say to the proofreader?
NM: Not often, but if I do it’s almost always a note that a change can’t be made because of reflow issues—mostly to do with hyphenation. And occasionally I’ll make note of a design style that overrules a type change.
IC: We’ve focused on hard-copy markup so far. Any thoughts about proofreaders working on PDF?
NM: I know in some instances I’ve missed smaller fixes in PDFs, like a change to one letter or a punctuation change, because they’ll just show up as tiny, tiny marks, and they’re easy to miss even in the full list of changes. If you click on the markup and add a short comment to it, though, it pops up as a little box, so it jumps out.
PDFs are great for shorter publications; I can copy and paste the text right out of the markup boxes, so that makes my life easy! But for a big book, hard copy is preferable. Having to go back and forth between windows on the computer is the issue.
IC: How much does it annoy designers when we make a change on first proofs and reverse it on second?
NM: It’s not usually a big deal—unless it’s a complete change from Canadian to U.S. spelling throughout, say. If that ping-pongs, then it can get annoying—though I’m sure it is for everyone involved! In that case a note about global changes is hugely appreciated.
IC: What can a proofreader do to ensure that the relationship with the designer is as collegial and productive as it can be, given that it’s such a bizarre, arms-length interaction?
NM: If markup is done professionally, then the relationship will be smooth. Just be clear, be thorough, and print neatly… and no all-caps yelling!
IC: Yes! I think those are all of my questions. Do you have anything to add?
NM: Just that I appreciate how much work goes into a thorough proofread, and I don’t know how you all do it! Sometimes your hawk eyes blow my mind!
This interview appeared in the Winter 2012 issue of Bulletin, the Indexing Society of Canada’s newsletter.
Noeline Bridge, editor of Indexing Names, has been an indexer for more than 20 years. She has published numerous articles on indexing and is the co-author of Royals of England: A Guide for Readers, Travellers, and Genealogists. Recently Iva Cheung interviewed her by email about Indexing Names, which Noeline edited and which was published this year by Information Today [ITI].
IC: What motivated you to compile Indexing Names?
NB: Two interrelated and rather vague thoughts led to the book: I’d thought on and off over the years that I’d like to write a book on some aspect of indexing, and, as I published articles and made presentations on names, that vague idea turned into a book on names. Also, over the years, other indexers had been producing books about indexing, but none was devoted to names. My conversation with John Bryans at the Information Today booth at a conference was the trigger. I was perusing the books on display, and John remarked that he wished more indexers would write books. I found myself asking, “So you would be interested if I wrote a book on indexing names?” To which he replied, “My response is, ‘When can you get it to me?’” A short time later, a posting on Index-L [an indexing listserv] mentioned the need for a book on indexing names. After drawing a few deep breaths, I responded to say that I was thinking about doing just that, knowing I was making a commitment and would follow through.
IC: How did you find, approach, and select contributors? Did you give them content guidelines?
NB: I’ve always collected listserv postings about names for my presentations and articles, so I went through those, along with ASI [American Society for Indexing]/ITI’s books on indexing, looking for expertise and writing skills. As my outline took shape, I dived into the listings of indexers available on the indexing societies’ websites, looking for relevant interests and experience.

I thought it would be easy to secure writers and articles, that everyone would have the same reaction I do when asked to write, leaping at the opportunity and producing the article! I was naive. Quite a few people turned me down—nicely, I must add!—but several referred me to others, some of whom agreed, while others referred me onward or suggested another relevant topic that ultimately bore fruit. Over time, a few writers dropped out, inevitably and understandably—indexers qualified to write chapters for books are very busy already, and when their lives became complicated by health or family issues, the added burden of writing proved to be just too much for them. A couple of others simply never produced their chapters after showing initial interest. For very important chapters I later found substitute writers or included that material in my own chapters; other ideas I just had to drop.

Seeing how difficult it was to secure writers, I imposed only a few guidelines for fear of putting off potential writers. Enid Zafran, ASI’s editor for their books, wanted substantive material, as did I. I asked for lots of examples along with background information—historical, where relevant—so that indexers could make informed decisions when examples didn’t match their requirements. I decided to worry about length later, just asking them to write what they wished in the meantime. Editing would come later.
IC: In the book’s introduction, you write that as you worked on the book, its direction changed and that the final product is “not the names indexing encyclopedia that I had envisaged.” What was that initial vision? And if you could add any material to the book now, what would you choose to add?
NB: When the book was a vague idea, I had various equally vague visions, like some vast compendium of short pieces on names belonging to as many nationalities and ethnicities as possible, or a compilation of all published articles on the subject, or… I wasn’t sure. However, when I approached Enid about the book, quite naturally she wanted an outline as soon as possible. So I had to produce one fast, realizing that only when I had at least a temporary outline could I approach possible writers. I still wanted as many national and ethnic names as I could get, but my compiled listserv messages were often about specific issues in names indexing and about names in particular genres of books. So I came up with the divisions in the book, feeling rather uneasily that it would look like three books in one, and even wondering if I should produce three books. That idea disappeared when I confronted the realities of securing writers; only the one book was feasible, at least at the time!

The material I was most dearly hoping to include was North American Native names; someone was interested initially but then dropped out, and although I tried hard, I never found a substitute. I also wanted more Asian names and at least some African ones, a chapter on local history (lots of name issues there!), religious names outside of Christianity (although some of that material was covered in other articles), and, somewhat similarly, European royalty and aristocracy.
IC: What I appreciate about the book is that it offers context and suggestions but isn’t overly prescriptive. It’s a guide, not a strict set of rules. And there is a recurring emphasis on respecting the author and reader in almost all of the contributions. Was that the effect you had hoped for?
NB: I’m glad you noticed and appreciated this aspect. As I mention in the book, I am a former library cataloguer, where we had to use a prescriptive, rules-based approach—as big databases must—to ensure uniqueness and matches for each person’s name. As a freelance back-of-the-book indexer, I came to realize that in this indexing context, genre and reader and authors’ and publishers’ styles often dictate especially how long or short, formal or informal, an indexed name should be. Consequently I changed my terminology from rules to conventions or guidelines in my articles and presentations. Reading the contributors’ chapters expanded my own flexibility and sensitivity to genre, styles, and user issues.
IC: You note in your chapter “Resources for Personal Names” that references are increasingly Web-based. Any plans to turn Indexing Names into a Web resource?
NB: No, I don’t think so. Although many websites remain surprisingly stable, other valuable ones arrive and depart or change their URLs. All URLs have to be checked often, and especially just before publication deadline, a time-consuming process—and frustrating when one tries to discover if the website is now under another name or has simply been pulled. Also, because books have to be finalized many months before publication, at least some URLs aren’t going to be current when the book comes out. Web-based resources are, I think, the stuff of journal articles but not published books.
IC: You wrote the index for Indexing Names—how intimidating was it to compose an index for a book by indexers about indexing?
NB: It was always on my mind that indexers would be using my index and judging it not only by how easily they found needed information but also how I’d structured it. One of my first index users pointed out to me that he’d looked up “stage names” and not found an entry, although there is a chapter on the names of performing artists—a See reference I should have thought of! And perhaps there are others… I shudder to think!