Danielle M. Moore, Melanie A. Porter, Saskia Kohnen and Anne Castles (2012). Detecting Different Types of Reading Difficulties: A Comparison of Tests. Australasian Journal of Special Education, 36, pp. 112–133.
This recent research by Moore, Porter, Kohnen and Castles was an interesting read, once you get past the statistical analysis. As a special education practitioner, the question of which assessment to give students who present with reading difficulties is ever present. For many, myself included, the factors that influence the choice are the cost of purchase, the time taken to administer, the ease of analyzing the results and, lastly, the reliability of the results.
In general, intensive literacy assessment takes place because of a discrepancy between a student's IQ and their academic performance. This article highlights the common mistake many teachers make of treating reading difficulty as a reflection of a student's IQ. It explores the reliability of identifying different reading difficulties by comparing three common reading assessment tools: the Wechsler Individual Achievement Test – Second Edition (WIAT-II), the Castles and Coltheart Reading Test 2 (CC2) and the Woodcock-Johnson III. My leaning before reading this article was towards the Woodcock-Johnson, as it is an assessment tool I use on a regular basis. The implications of the study's findings are limited by the small sample size the researchers studied, but they do highlight the fact that the choice of assessment can affect a student's diagnosis of reading difficulty and their subsequent access to support.
You can access this article here if you have a university account, or purchase it directly from the publisher.
The parameters of the study were set by definitions of what the three assessments sought to identify: a student's ability to recognise words automatically by sight, and the ability to decode new words. Within the paper this is framed as the dual route model, in which readers draw on their lexical knowledge (i.e., being able to recognise words from a stored vocabulary) alongside the ability to decode words they do not recognise in order to form meaning.
Using the lexical route, a reader is able to rapidly process all words they have previously encountered, including irregular words (e.g., yacht, pint), but will have difficulty processing unfamiliar words or nonwords (e.g., jeaph, zickling), as these words do not have a lexical representation. Processing along the nonlexical route, a reader will successfully read nonwords, unfamiliar and familiar regular words (i.e., words that obey the letter-to-sound rules of English; e.g., leaf, make). However, irregular words, which do not follow the standard spelling-to-sound rules, cannot be read successfully via this procedure. Reading irregular words by the nonlexical route will produce an incorrect response, known commonly as a regularisation error (e.g., pronouncing pint as if it rhymed with mint; Castles, 2006; Coltheart et al., 1993). (p. 113)
So, based on this dual route model, an assessment of students' lexical ability must include sufficient irregular words to test their stored vocabulary. To assess students' decoding ability, the assessment must focus on the reading of nonwords to gauge their phonological decoding.
This distinction brings forward the situation of students who struggle with dyslexia. Students can present with surface (lexical) dyslexia or phonological (nonlexical) dyslexia, and it is through the assessment of these two subtypes that a diagnosis can be obtained.
The study did outline that word length and the difficulty of the phonemic makeup of the words presented affected an assessment's validity. Nonwords that were either too short or too similar to the real words from which they were derived were considered too easy to be useful for assessing difficulty. To assess phonemic awareness, the test also had to include a representative number of words with consonant clusters.
The study did have a number of limitations, chiefly that it drew on such a small sample that it would be hard to generalise the finding that one assessment outperformed another.
When examining the degree of impairment, the CC2 and the Woodcock-Johnson recorded the greatest difficulty among those assessed. The authors suggest this is due to the number and irregularity of the words and nonwords in each section of the assessments: the greater the level of irregularity, the more pronounced the difficulty recorded.
This does have implications for the assessment of students with reading difficulty, as such assessments are linked to funding, exam provisions and support in classrooms. The study also highlighted the fact that reading difficulties should not be treated as a measure of a student's IQ.
The paper's abstract: "The focus of this paper is on the assessment of the two main processes that children must acquire at the single word reading level: word recognition (lexical) and decoding (nonlexical) skills. Guided by the framework of the dual route model, this study aimed to (1) investigate the impact of item characteristics on test performance, and (2) determine to what extent widely used reading measures vary in their detection of lexical and nonlexical reading difficulties. Thirty children with reading difficulties were administered selected reading subtests from the Woodcock-Johnson III, the Wechsler Individual Achievement Test – Second Edition, the Castles and Coltheart Reading Test 2 (CC2), as well as a measure of nonverbal IQ. Both within-subjects analyses and descriptive data are presented. Results suggest that in comparison to a pure measure of irregular word reading, children with reading difficulties perform better on word identification subtests containing both regular and irregular word items. Furthermore, certain characteristics (e.g., length, similarity to real words) appear to influence the level of difficulty of nonword items and tests. The CC2 subscales identified the largest proportions of children with reading difficulties. Differences between all test scores were of statistical and clinical significance. Clinical and theoretical implications are discussed."