How do writers know when their copy is pitched at the right level for their target audience to comprehend?
Is the copy so “dense” that its readers will struggle to get the writer’s meaning? Do readers have to go back over passages more than once to understand what the writer was trying to say? Or is the copy overly simple, almost condescending, for a much more discerning and educated audience?
Why is ‘readability’ important?
There is a formal process for determining where copy sits on the comprehension spectrum. It is known as “readability” and it had its beginnings more than a century ago.
Readability ‘science’ weighs the complexity of a text against the years of education needed to comprehend it with ease. Average sentence length, the complexity of words and paragraphs, and overall visual density are the elements that usually come into consideration.
What has ‘years of education’ got to do with it?
For those working in mass media, it’s helpful to know the average education level of the general public. Writers can easily check that data at the United Nations Development Program’s International Human Development Indicators webpage.
Here in Australia, the average number of years of schooling for adults has reached 12 years, up from just 10 years in the mid-1970s. But news writers need to remember that this figure is an average, covering a spread that runs from early school-leavers to multiple doctorate holders.
[Disclosure: A decade ago, I worked with respected university colleague and researcher Grant Dobinson on a longitudinal study that compared the readability of three daily newspapers – one regional, one statewide and one national – at three intervals over a 30-year period. We looked at whether, as levels of education of those working in newsrooms rose, this had resulted in higher readability scores for news and editorial copy. And, if so, whether it had exceeded the increase in years of education in the broader public. It had, especially in editorials.]
How can you measure readability?
Over the past century or so, many different formulae have been devised to measure ease of reading texts, among them:
- The Gunning Fog index
- The Fry readability formula
- The Flesch Reading Ease test
- The Flesch-Kincaid Grade Level test
- The Dale-Chall readability formula
- The Simple Measure Of Gobbledygook (SMOG)
- The FORCAST readability formula
- The Coleman-Liau index
- The Automated Readability Index
Perhaps the best known – and one of the easiest of these readability indices to use – is the Gunning Fog index, which came into use near the end of World War II.
How do you calculate Gunning’s fog index?
For today’s reporters and writers, who may tend to be mathematics-averse, the good news is that readability doesn’t need to be checked frequently. But it is useful, from time to time, to test the “density” of your writing and see whether it matches the audience who will be reading your work.
And here is how to apply Robert Gunning’s readability measure longhand (with an example and a handy tip to follow):
- Select a passage (one or more entire paragraphs) that has around 100 words. Do not omit any sentences
- Calculate the average sentence length of the passage by dividing the total number of words by the total number of sentences
- Then count the number of words with three or more syllables (leave out proper nouns, familiar jargon and compound words; also do not count the common suffixes -es, -ed or -ing as syllables) and express this as a percentage of the total word count
- Now add the average sentence length and the percentage of complex words
- Finally, multiply the result by 0.4
- This will give you an estimate of the formal years of education needed to comprehend the text on a first reading (e.g., if the result is 12, a Year 12 student will have no trouble understanding the passage; if it’s 15, then you would need to have the equivalent of a bachelor degree-level education – 12 years of high school plus three years at university – to fathom the passage)
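The steps above can be sketched in a few lines of code. This is a minimal illustration, not a polished implementation: the syllable counter uses a simple vowel-group heuristic, and the exclusions for proper nouns and compound words are rough approximations, so its scores may differ slightly from a careful longhand count.

```python
import re

def gunning_fog(text):
    """Estimate the Gunning Fog index of a passage.

    A rough sketch of the longhand method: syllables are counted as
    vowel groups, proper nouns are approximated as capitalised words,
    and hyphenated words stand in for compounds.
    """
    sentences = [s for s in re.split(r'[.!?]+', text) if s.strip()]
    words = re.findall(r"[A-Za-z'-]+", text)

    def syllables(word):
        word = word.lower()
        # Do not count the common suffixes -es, -ed or -ing as syllables
        word = re.sub(r'(es|ed|ing)$', '', word)
        return max(1, len(re.findall(r'[aeiouy]+', word)))

    def is_complex(word):
        if word[0].isupper():   # skip proper nouns (an approximation:
            return False        # this also skips sentence-initial words)
        if '-' in word:         # skip compound (hyphenated) words
            return False
        return syllables(word) >= 3

    avg_sentence_length = len(words) / len(sentences)
    pct_complex = 100 * sum(is_complex(w) for w in words) / len(words)
    return 0.4 * (avg_sentence_length + pct_complex)
```

A passage of short, plain sentences scores low (a couple of years of schooling), while a single sentence stacked with polysyllables scores far higher.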
Demonstrating Gunning’s fog index in action
Earlier this week I wrote a feature that appears elsewhere on this site. So I’ve grabbed a passage containing 122 words and will now demonstrate how I used the Gunning Fog index to assess its readability (words of three syllables or more are underscored):
Thankfully, we live slightly north of the river system that fell victim to the after-effects of a deadly inland tsunami which eventually reached the coast, flooding a third of the Brisbane metropolitan area.
In the two years since, the region’s rainfall has been mostly above average. And, while earlier this year other parts of the state were seriously affected by summer flooding and severe storms, this region was merely wet … a lot.
La Nina, naturally, has put paid to any of the drought-tolerant plants that could not thrive in the almost continuously damp conditions.
But all that moisture had also fired up the undergrowth in parklands and bush areas, making them greener than they have been for more than a decade.
No. of words (122) ÷ No. of sentences (5) = 24.4 words/sentence
11 of the 122 words (9%) have three or more syllables
(24.4 + 9) x 0.4 = 13.4 years (rounded up from 13.36)
So, to read and understand the passage comfortably on its first pass, my readers ideally would have completed a bit over 13 years of formal education. That’s a little above the average years of education level here in Australia, so perhaps that’s a reminder for me to write slightly less complex sentences.
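The arithmetic above is easy to double-check in a couple of lines, using the counts already taken from the passage:

```python
# Counts from the passage above: 122 words, 5 sentences,
# 11 words of three or more syllables.
words, sentences, complex_words = 122, 5, 11

avg_sentence_length = words / sentences           # 24.4 words/sentence
pct_complex = round(100 * complex_words / words)  # 9 (rounded, as in the longhand example)
fog = 0.4 * (avg_sentence_length + pct_complex)

print(round(fog, 2))  # 13.36, i.e. roughly 13.4 years of education
```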
How does that compare to other sorts of writing?
A British website has a helpful guide that compares (years of education) scores to likely sources of text at the different levels.
The good news is that I also put my passage through a handy online calculator and found the result pretty close (13.5 years). So, for those who feel the longhand method is too tedious, there is an alternative.
How does sentence length play into all of this?
In addition to knowing the years of education ideally needed to comprehend a passage of text, readability science can help writers gauge a comfortable average sentence length for the level of education in their target audience.
Decades ago, Gunning himself recommended that sentences average 15-20 words (and, by the way, today most automated readability calculators work on an average word length of five characters).
Gunning’s suggestion caters for audiences that have amassed at least 10 years of education. With Australians now averaging 12 years of formal education, we could safely lift that range to 18-24 words/sentence. (Phew!)
But isn’t variety the spice of life?
Certainly. Part of the secret to reader comfort over the entirety of a story or chapter is to vary the lengths of sentences. Too many short sentences become boring for readers; too many long ones tire them out. The objective for writers seeking to optimise the readability of their text is to retain readers until the end.
For good reason, then, journalism educators have traditionally encouraged novice reporters to file news stories with their longest sentences well under 30 words, and to keep introductory paragraphs ideally below 25 words.
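That guideline lends itself to a quick self-check. The sketch below flags over-long sentences; the 30-word threshold is simply the guideline figure above, and the sentence splitting is deliberately crude.

```python
import re

def flag_long_sentences(story, max_words=30):
    """Return the sentences that exceed the guideline word count."""
    sentences = [s.strip() for s in re.split(r'[.!?]+', story) if s.strip()]
    return [s for s in sentences if len(s.split()) > max_words]

story = "Short lead here. " + " ".join(["word"] * 35) + "."
print(len(flag_long_sentences(story)))  # 1 over-long sentence flagged
```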
Another reason to pay extra attention to the length of introductory paragraphs in the digital age is far more practical.
Since the advent of online news, a website’s home page will typically carry a series of mini-summaries previewing its main stories. Each of these story “snapshots” will have a headline plus its first paragraph or a shortened version of the intro (sometimes this block of text is referred to as a ‘standfirst’).
The room available for these standfirsts is usually limited by design and typography (for instance, it may top out at around 150 characters). In busy newsrooms, then, writing introductions that fit within those online parameters becomes an important (if sometimes limiting) consideration.
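A newsroom tool for this check could be as simple as the sketch below. The 150-character limit is purely illustrative – real CMS templates set their own caps – and the sample intro is invented for the example.

```python
def fits_standfirst(intro, limit=150):
    """Check whether an intro will fit a home-page standfirst slot.

    The default 150-character limit is an illustrative figure only.
    """
    return len(intro) <= limit

# A hypothetical intro, written for this example
intro = ("Floodwaters receded across the region overnight as "
         "emergency crews began assessing damage to homes and roads.")
print(fits_standfirst(intro))  # True for this short intro
```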