A blog called ‘Does Easy Read work?’ caught my attention. It’s easy to take claims at face value, but checking the original articles reveals a different view.
The article was written by an organisation called IAG. IAG refers to five ‘key’ studies ‘about the use of Easy Read’ for people with intellectual disability.
Three of these studies contributed to my own Easy Read research (Fajardo et al. 2014, Hurtado et al. 2014, Jones et al. 2006). The other two papers looked at symbols. Symbols are usually used for people with moderate to severe communication needs – a different level of need from that of people who can read texts.
Easy Read research
Only the paper by Hurtado et al. actually mentions Easy Read. IAG interprets Hurtado et al.’s results as evidence participants ‘understood the information’. To work out whether Hurtado et al., and other researchers, really do tell us anything about ‘understanding information’, we have to understand ‘understanding’.
Information is made up of collections of interrelated sentences, called texts, which are written or spoken. For full understanding, you have to understand the whole text.
Imagine a text is like a house: the bricks (words) combine to make walls (sentences) which together make a house (the text). To understand a text, the recipient must see (or hear) the house, and go inside.
Seeing is a surface level experience. For understanding, recipients have to process and internalize the information, as if going deep inside the house.
As everyone is different, the inside of every ‘house’ is different. We can look through the window, but we can never know for sure what other people’s interpretations look like.
What’s tested, and how?
Hurtado et al. say they tested understanding of ‘concepts’ using recall and recognition techniques. Hurtado et al. do not explain what they mean by ‘concepts’, but this term usually refers to the ideas words represent.
For example, ‘back’ could represent the reverse side of an object, but it has other meanings too, so concepts are also context dependent.
Recalling words is not evidence of understanding, because recall can occur without any understanding. Even if Hurtado et al.’s research does measure understanding of words, we need to know how each participant understands the sentences and the whole text, before we can claim the overall message has been understood.
Some grammatical (syntactic) structures are easier to understand than others. Jones et al. tested understanding using a standardised test, the TROG (Test for Reception of Grammar). The TROG isolates grammatical structures, to find which structures an individual can understand, and which are difficult.
Jones et al.’s study involved 24 service users with a mild or borderline learning disability. For example, none of the 24 participants understood embedded clauses.
Easy Read guidelines do not explicitly mention embedded clauses, or other types of syntactic complexity. Yet embedded clauses (for example, the relative clause in ‘The nurse who saw you will call’) are common in many Easy Read documents.
‘Short’ vs ‘understandable’
IAG tells us that ‘the studies we looked at do seem to agree on the simplification of the text’. They add: ‘Jones, Long and Finlay state that a general rule in writing Easy Read should be to use lots of short, simple sentences instead of one complex sentence’.
Jones et al. do not state this. In fact, Jones et al. do not use the words ‘Easy Read’ or ‘short’ anywhere in their paper. What the authors actually advise is this:
‘Where individual tailoring is not possible, it would be helpful if those producing the written material could draw on a detailed knowledge base concerning the reading comprehension of adults with learning disabilities … to select vocabulary and grammar that are more likely to be understood.’
Images in Easy Read
IAG reports that their images are ‘very well received’, that users have a ‘strong recognition of the images used’, and that the images ‘greatly improve’ understanding.
We need to know what is meant by ‘recognition’, and how this relates to the understanding of words, sentences and whole texts. We also need to know whether service users in different organisations, who are familiar with different images, have similarly strong ‘recognition’ of IAG’s images.
Images can cause confusion
Although IAG concedes Poncelas and Murphy’s finding that symbols do not help understanding, they do not quote Hurtado et al.’s comment:
‘There is a dearth of research objectively examining whether adding pictures actually enhances the comprehensibility of written text’. Hurtado et al. also report other studies which show symbols and pictures ‘did not improve understanding’ and ‘may even contribute to short term confusion’.
Illustrating different types of words
Images can represent objects, but it is harder to represent specific actions, and impossible to represent abstract ideas and the links between ideas (which make up a text), unless the recipient already has some understanding of the text.
In Easy Read, images often illustrate the concrete concepts within a sentence, rather than the more difficult, abstract concepts, and relationships between ideas.
There can be a weak or non-existent match between the objects represented in the image, and the words within the sentence. For example, the Makaton sign for ‘good’ may be used without the word ‘good’ occurring in the corresponding sentence. The reader needs to interpret the sentence first, and infer meaning not explicitly stated within the text, to understand the relevance of the image.
Contrary to IAG’s claim, there is no evidence that ‘these findings suggest that a combination of methods could result in the highest overall comprehension rate’. There is research showing that the addition of images reduces comprehension, because of the increased processing demands.
This is not to say that images do not help – just that they are not particularly effective in the way they are used in Easy Read.
IAG describes their user testing as ‘talking to a small group of people’, and finds users have ‘good understanding of key concepts’.
Reading comprehension requires processing not just the words, but also the sentences, and how these all link together to form a coherent text. It is very difficult to control the many variables that impact on understanding. Everyone’s understanding is unique, and a group discussion cannot give us detailed information on individual interpretations.
To increase transparency and validity in user testing, we need to know what was tested, and how.
A lack of theoretical knowledge forms a barrier to understanding relevant research. For example, it is misleading to apply findings about understanding single words to the effectiveness of whole texts, as word comprehension represents only a small part of the reading comprehension process.
Fajardo et al.’s discussion on connectives is complex, and refers to previous theoretical models. IAG’s claim that the authors attribute difficulty to ‘the number of syllables in each sentence’ is contrary to the paper’s conclusion. Fajardo et al. actually noted their findings were in contrast to approaches which advocate short words and sentences, and advised:
‘A longer sentence in which the link between two clauses is explicitly signalled may be easier to understand than two short separate sentences, if the individual has reached a certain level of knowledge’. (If the individual has not reached ‘a certain level of knowledge’, it is the content that needs to be adapted, not the sentence length.)
IAG also tells us that Fajardo et al.’s participants ‘found it harder to understand sentences the more connectives the sentence contained’. Cognitive, language and reading research tells us something different (as discussed by Fajardo et al.). The researchers conclude that ‘performance level was affected by the type of connective and its familiarity’, rather than the number of connectives. Other studies, which draw on contemporary reading theory, also suggest that reducing word and sentence length can increase comprehension difficulties.
IAG concludes ‘these studies support much of what we know – that short words, short sentences and short paragraphs help.’
In fact, there is very little empirical research specifically on Easy Read. Contemporary research and speech and language therapy (SLT) experience suggest that some Easy Read and plain English techniques are contrary to the needs of people with low literacy and communication disability.
Studies which evaluate the accessibility of health information adapted using traditional simplification techniques (including Easy Read and plain English) usually rely on readability formulae (which measure word and sentence length, rather than understanding), and rarely include people with disabilities.
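Readability formulae make this limitation easy to see. A minimal sketch of the Flesch-Kincaid grade-level formula (the syllable counter is a rough vowel-group heuristic of my own, not a standard implementation) shows that the score depends only on word and sentence length – the meaning of the text never enters the calculation:

```python
import re

def count_syllables(word):
    # Rough heuristic: one syllable per group of consecutive vowels.
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def flesch_kincaid_grade(text):
    # Standard formula: 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * (len(words) / len(sentences))
            + 11.8 * (syllables / len(words))
            - 15.59)

# Short words and sentences always score as 'easier',
# regardless of whether the reader understands the content.
easy = flesch_kincaid_grade("The cat sat. The dog ran.")
hard = flesch_kincaid_grade("Quantum decoherence constrains macroscopic superposition.")
print(easy < hard)  # True
```

Because the formula sees only lengths, a text of short, disconnected sentences with no explicit links between ideas can score as highly ‘readable’ while being harder to understand than a longer, well-signalled sentence.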
There is also a lack of evidence to support the claim that ‘Easy Read suits a wider audience, not just people with intellectual disability’.
Without an understanding of the components of language that contribute to difficulty, writers and information providers cannot make informed choices about how to adapt, and have no theoretical guidance on what to test. Adapting information requires specialist, technical knowledge and skills, yet Easy Read has no accredited training, and no quality control.
The Accessible Information Standard, for health and social care in England, requires:
‘Where a need for support from a communication professional is identified, services MUST ensure … that interpreters and other communication professionals are suitably skilled, experienced and qualified. This SHOULD include verification of accreditation, qualification and registration with a relevant professional body.’
Stringent professional standards are set for sign language interpreters and translators. We believe people with language and learning disabilities, who require texts to be adapted for accessibility, deserve the same level of professionalism.
Improving quality and effectiveness
IAG tells us the research shows people with learning difficulties have different levels of understanding, and everyone is unique. Yet Easy Read does not provide guidance on how to adapt information strategically for different levels of understanding, or how to test for different components of understanding.
Some Easy Read and plain English techniques are contrary to contemporary research on low literacy and communication disability.
To improve the quality and effectiveness of accessible information provision, Inklecomms has developed Easier English. Easier English is evidence-based and responsive to need. Easier English enables information providers to create accessible information for diverse audiences, and to make sense of user feedback, when everyone has different information and communication needs.
Read more about Easier English.