Guest Blog: Are sight words unjustly slighted?

What is meant by ‘sight word reading’? It’s a term that seems to mean different things to different people, leading to misunderstandings and confusion.  We asked Professor Anne Castles to share with us what the evidence says about sight word reading.

Phonics first and phonics fast. Few would now question this mantra or challenge the view that explicit phonics teaching is at the core of any effective initial reading program. But I have noticed an unfortunate side effect of the increasing acceptance of the primacy of phonics. This is the belief that any literacy activity in the early school years that is not phonics must be harmful to children’s learning. This is understandable: the battle to reinstate phonics has been hard fought and none of us wants to see its benefits diluted. But I think it is important that we not let the phonics focus cloud our judgments about other methods that may further improve reading outcomes for children.

The area where I have particularly noticed this tendency is in relation to the teaching of “sight words”. The view of many seems to be that phonics and sight words are sworn enemies: the two, it seems, cannot be in the same room together. Teaching sight words is viewed as not only ineffective but also dangerous, causing children to become confused and setting them up with bad reading habits that interfere with their ongoing phonics instruction. But what is the evidence here?

An important first step is to distinguish the teaching of “sight words” from the process of reading “by sight”. The latter refers to a state of the reading system, not a teaching method: as skilled readers, we recognise most printed words quickly and automatically, and gain access to their meanings without needing to rely on overt phonological decoding. A simple piece of evidence for this is that we know that the written word sail means something quite different from the word sale, even though we would not be able to distinguish the two by sounding them out. We can do so because we have a precise memory representation of the spelling of the word sail that is linked with its meaning and pronunciation, and we rapidly access this knowledge upon seeing the word. This is what we mean by reading “by sight”, and it is represented as the most proficient level of word reading in all prominent developmental theories, including those of Ehri, Frith and Share. The ultimate aim of reading instruction should therefore be to have children reading as many words as possible in this highly proficient way, and as soon as possible.

But the question is how? What is the optimal way of teaching children in order to bring them to this level of word reading expertise? Certainly, a key piece of the puzzle is phonological decoding itself, and this is why phonics instruction is so important. In his self-teaching hypothesis, Share explicates the reasons why phonological decoding is integral to becoming a skilled word reader: it allows the child to independently translate written words into their spoken form and in doing so to link them with words in their existing oral vocabulary. It also focuses the child on letters and letter clusters within words, helping them to establish that knowledge in long-term memory.

But is there more that can be done to promote proficient word reading beyond teaching phonics? Phonological decoding is effortful and, at least in English, doesn’t always produce the right pronunciation. In fact this is true for many of the most frequently occurring words in children’s books (think the, I, said and come). This being the case, it seems reasonable at least in principle to focus some teaching at the word level rather than at the letter-sound level; that is, to target specific tricky words that children are likely to encounter regularly and to focus instruction on ensuring children can quickly and accurately recognise those words. This, in broad terms, is what we mean by sight word teaching (note that this is not related to the widely discredited practice of Whole Language instruction, where the focus is on immersion in texts and contextual guessing rather than intensive exposure to individual words).

So are the concerns about sight word teaching justified? To answer this, we need to address two questions: is it effective, and does it interfere with phonics instruction?

Is sight word teaching effective? The short answer is yes: numerous experimental studies have reported substantial and sustained improvements in children’s word reading following targeted teaching of those words, in both typical (e.g., here, here, and here) and struggling (e.g., here, here, and here) readers. The longer answer is that of course effectiveness varies according to the nature of the teaching, and more research comparing the efficacy of different methods is sorely needed. Nevertheless, sight word teaching that meets the following basic criteria has been demonstrated to be highly effective:

  • It focusses the child on a word’s pronunciation as well as the letters within it and their order. This may involve activities such as having the child write the word, copy it, pronounce it aloud, or say its letter names sequentially.
  • It relies heavily on repetition and feedback. Effective sight word teaching methods expose children to the same words again and again until they are reliably recognised and pronounced, with errors being corrected all along the way.
  • It is targeted at children who can recognise letters and who have some grasp of the alphabetic principle. Teaching sight words to children who have not reached this stage – by encouraging them to identify words by their overall shape or by salient visual features – does not transfer to long-term benefits.

A response I often receive when presenting evidence such as the above is that, yes, sight word training may be effective for individual words, but the benefits are highly specific and do not represent a generalisable skill – that relying on this method would require teaching a child the entire dictionary! A first point to note here is that sight word learning does in fact generalise in various ways: Several studies have shown that children induce letter-sound mappings through their experience with written words (e.g., here and here), and there is even some evidence that learning to spell difficult irregular words can generalise to the spelling of other, untrained irregular words. But also, there is no suggestion that any teaching program should involve only sight word instruction – rather the idea is that it be used judiciously for selected words in parallel with an explicit, systematic phonics program. Solity and colleagues have demonstrated that the combination of knowledge of the 64 most common letter-sound mappings of English, together with familiarity with its 100 or so most frequent words, allows children to independently read 90% of words in texts they typically encounter – putting them very efficiently on the path to reading what they wish without assistance.

Let us then turn to the second key question: does sight word teaching interfere with phonics instruction? If this were the case, we would expect phonics teaching carried out simultaneously with sight word teaching to be less effective than phonics teaching on its own. But there is no evidence that this is so: a recent large intervention study with struggling readers found that children who received mixed phonics and sight word instruction made just as strong gains in their phonological decoding ability as those receiving phonics instruction alone. There was also no evidence from this study that sight word teaching caused struggling readers to become confused or to “unlearn” phonics rules that they had already acquired: children who received an intensive period of sight word instruction immediately following an intensive period of phonics instruction did not show any deterioration in their phonological decoding ability, and in fact continued to show improvements.

A qualification here is that this study was carried out with older struggling readers, all of whom had at least some phonics knowledge. What do we know about beginning readers? A recent study by Shapiro and Solity is highly relevant: They compared the effectiveness of two phonics programs being implemented in the first (reception) year of schooling in the UK: Letters and Sounds, which teaches multiple letter-sound mappings and no sight words, and Early Reading Research, which teaches only the most consistent letter-sound mappings plus high frequency sight words. Follow-up of reading and phonological awareness outcomes at the end of the second and third year of schooling revealed that the two programs were equally effective, indicating that the presence of sight words did not interfere with phonics learning. In fact, there was a tendency for children with low initial phonological awareness scores to do better with the Early Reading Research program, suggesting that being exposed to multiple alternative sound mappings for the same graphemes, rather than sight words, may have been a source of confusion for these children.

In summary, literacy takes off at the point when children can independently read what they wish for pleasure and learning. The evidence to date suggests that sight word teaching, carried out in combination with a structured explicit phonics program, helps rather than hinders children from reaching this point. A fear of diluting the phonics message, though understandable, is not a sufficient reason for ignoring this evidence. Instead, we must ensure that teachers are made aware of what the science tells us about both phonics and sight words, and that they are provided with the training they need to translate it into effective practice.

Questions and Future Directions

  • What is the minimum level of alphabetic knowledge that beginning readers need in order for sight word teaching to be effective?
  • Which methods of sight word teaching – writing, copying, repeated pronunciation, or sequential letter naming – are most effective?
  • What is the optimal number of sight words to teach at different points in reading acquisition, and with what intensity?
  • Is there a better term than “sight words” that could be adopted so as to reduce confusion about this method of instruction?

Anne Castles is Distinguished Professor in the Department of Cognitive Science at Macquarie University. She is Deputy Director of the Australian Research Council Centre of Excellence in Cognition and its Disorders, where she leads the Reading Program. Follow her on Twitter @annecastles


47 thoughts on “Guest Blog: Are sight words unjustly slighted?”

  1. John Walker July 2, 2016 at 10:22 am

    In her blog, Anne Castles asks the question ‘Are sight words unjustly slighted’? Here’s the answer, Anne: Yes! And here’s why:
    I will say at the outset that there are so many ideas and assertions in this blog post that simply cannot be justified that it’s difficult to know where to begin. I won’t, therefore, try and deal with them all, only two or three.
    Firstly, I would question the mantra of ‘phonics first and phonics fast’. It’s only half correct. ‘Phonics first and only’ should be the mantra; ‘fast’ is not possible because the English alphabet code is highly complex. It takes about three years for most children to learn the (roughly) 175 common spellings of the 44 or so sounds in English. It then takes a further four years of exposure and explicit teaching for the 50% of children who are likely to need this kind of explicit teaching if they are to become properly literate and to cope with secondary education (11-18 years). After that, we are further refining and building our understanding and knowledge of the code for the rest of our lives, especially when dealing with new ways of spelling sounds (e.g. spellings that have come into the language through Indian English).
    My second point of disagreement is the implied acceptance by Castles of Coltheart et al.’s contention that reading is a dual-route process. The idea that there need to be separate processes is contradicted by plenty of other research (see McGuinness, D., Beginning Reading Instruction for chapter and verse). But let’s take the examples Castles cites of ‘sail’ and ‘sale’, of which she claims ‘we would not be able to distinguish the two by sounding them out’. Why not? The first thing you need to be able to do is precisely to ‘sound them out’. As they are being decoded, the brain is searching the mental lexicon for meaning. Why wouldn’t the two processes take place simultaneously as the word is being decoded? The two homonyms do sound the same – when you’ve decoded them – but the sound /ae/ in the words is spelled differently. If children are taught to segment and blend sounds in words to automaticity and they are taught that ‘ai’ and the split spelling ‘a-e’ represent the sound /ae/, they get to ‘sail’ or ‘sale’ without trouble. At this point, context does the rest.
    Chomsky once implied that the English spelling system was well suited to the language partly because of this feature: there are thousands of homonyms in English and spelling sounds in different ways is one means by which they are ‘distinguished’. Teaching children the different ways of spelling sounds is also generative; teaching individual words, one at a time, is very, very time-consuming, it is not generative, and many children can’t do it (paired associate learning!). So, the way we answer Castles’s dilemma is to teach all the common ways of spelling the sounds in English over the first three years in school.
    Next, and central to Castles’s argument, is her assertion that ‘phonological decoding… doesn’t always produce the right pronunciation’. Ah, the rock on which so many professors founder! In support, she offers us the words ‘the’, ‘I’, ‘said’ and ‘come’. Again, I ask why? The ‘e’ in ‘the’ is a schwa. If a child is reading, they say /th/ /e/ or /th/ /ee/ and then normalise it. If they are writing, they need to be taught how and when schwas are likely to be a problem and how to overcome the problem. The professor obviously has no idea. I taught my five-year-old grandson how to deal with schwas and he then proceeded to read lots of words on the London tube and to tell me where the schwas occurred! It then took me about five minutes to teach him how to spell them – using a spelling voice when he is writing. Now here’s the thing: if you also teach children that many spellings can represent different sounds in the language and teach them which sounds they represent in a coherent and structured system, the problem evaporates. If you don’t believe that children can understand this idea, draw a circle and ask any four or five-year-old what it can be. They’ll tell you that it can be a circle, a moon, a pizza, a ball… If the spelling ‘o’ can be /o/ in ‘hot’, it can also be /oe/ in ‘go’, /oo/ in ‘to’, or /u/ in ‘come’. The difficulty comes in teaching exactly which sounds it can be. So, the spelling ‘i’ can be /i/. It can also be /ie/ as in ‘tie’. In ‘come’, the sound /u/ is (for historical reasons) spelled with ‘o-e’, and consonant plus ‘e’ is a common way of spelling many sounds at the ends of words (sleeve, some, borne, engine, gauche, route – I could go on).
    And I could go on! Teaching ‘sight words’ is very dangerous because most teachers are not taught to teach phonics properly. Learning to teach phonics properly enables a teacher to dispense with all the nonsense of ‘silent letters’, ‘magic e’, ‘sight words’, ‘hard sounds and soft sounds’ and so on. Our orientation should be to teach from sound to print and NOT print to sound, to teach the essential skills, to teach children to understand how the code works, and to teach all the common sound-spelling correspondences (for starters).
    If you teach phonics as it should be taught, even though it’s a complex business, you’ll never need to teach all these so-called ‘sight words’. And herein lies the danger: when teachers don’t understand the code, everything quickly becomes an excuse to teach ‘sight words’, as well as all the ‘cute’ little tricks that don’t work, and teachers quickly fall back into teaching Whole Language/Look and Say.
    The real problem lies with the professoriate, many of whom also have no idea about how to teach children to read and spell and, furthermore, don’t actually get into a classroom and do it. It’s in the classroom that ideas such as teaching children lots of ‘sight words’ are put to the sword.

  2. Debbie Hepplewhite July 2, 2016 at 12:13 pm

    Hi Anne,

    Thank you for your post which is indeed an important one as the issue of ‘sight words’ is one well worth raising.

    One of the questions you ask is whether there is a better term than ‘sight words’ and I would suggest that there is indeed a better term because ‘sight words’, as you yourself have described in your post, does mean different things to different people. Learning a bank of ‘sight words’ as global shapes, as an ‘initial sight vocabulary’, and prior to children being taught anything about the alphabetic code and its application by synthesising (commonly known as sounding out and blending), gives children the wrong message about how print works. Many children, particularly young learners, will see whole printed words only as meaningless squiggles without any phonics input, and children should not be left to intuit the alphabetic code for themselves.

    I would like to address some of the statements you have made in your piece which refer to research findings. You wrote:

    “They compared the effectiveness of two phonics programs being implemented in the first (reception) year of schooling in the UK: Letters and Sounds, which teaches multiple letter-sound mappings and no sight words, and Early Reading Research, which teaches only the most consistent letter-sound mappings plus high frequency sight words. Follow-up of reading and phonological awareness outcomes at the end of the second and third year of schooling revealed that the two programs were equally effective, indicating that the presence of sight words did not interfere with phonics learning. In fact, there was a tendency for children with low initial phonological awareness scores to do better with the Early Reading Research program, suggesting that being exposed to multiple alternative sound mappings for the same graphemes, rather than sight words, may have been a source of confusion for these children.”

    You refer to the teaching of ‘Letters and Sounds’ in Reception but then the follow-up findings being after the second and third year of schooling. This is misleading in a number of ways. Did the children receive ‘Letters and Sounds’ only in Reception, or two or three years of ‘Letters and Sounds’? And did the children receive ERR only in Reception or did it continue for the next couple of years?

    I ask because ‘Letters and Sounds’ spans a greater period than Reception. Many schools, for example, are likely to relate the ‘Letters and Sounds’ steps of ‘phase two, three and four’ to Reception (the simple or basic code whereby the 44 or so sounds are introduced systematically but mainly one spelling alternative) and these schools then go on in Year One to ‘phase five’ to re-visit the sounds and introduce the many spelling and pronunciation alternatives. Year Two (the third year of ‘Letters and Sounds’) tends to be associated with some spelling rules and grammar aspects of foundational literacy.

    And then, since ‘Letters and Sounds’ (DfES, 2007) is more realistically a framework than a programme, having no actual teaching and learning resources, the schools using ‘Letters and Sounds’ as their core programme are, in reality, having to translate the guidance in their own way. There are definitely patterns one can observe in ‘Letters and Sounds’ schools but these don’t necessarily resemble one another. For example, you can have the ‘fun games and activities’ interpretation – perhaps dipping into some commercial mnemonic systems and resources – compared to the mainly ‘mini whiteboard’ practice which can look very different from the plethora of games interpretation. What I am trying to point out here is that one ‘Letters and Sounds’ school can look very different from another. Now this point may well have been clarified in the research comparing ERR with ‘Letters and Sounds’, but not necessarily.

    Then, there are some commercial programmes which have been written to ‘deliver’ the alphabetic code described in ‘Letters and Sounds’. How might these fare when compared to ERR? I am trying to indicate that when it comes to ‘research’, we have some way to go to BUILD on what has been achieved to date with research findings. The commercial programmes are sometimes in danger of being derided precisely because they are commercial, or of being excluded from current research because they are commercial. An example of this is the increasingly well-publicised Education Endowment Foundation in England. This organisation is heavily funded but precludes commercial organisations from applying for research projects. Research projects funded by the EEF, however, can include the use of commercial programmes/resources – but nevertheless this is all a bit serendipitous as to what gets researched and how well the content is understood and described.

    Further, the ERR approach of teaching 64 letter/s-sound correspondences of the English alphabetic code is extremely minimal when considering the complexity of the English alphabetic code. Not only do we want young learners to be launched on the road to reading, we must also appreciate the enormity of tackling English spelling. Not only do we have over 44 units of sound to teach and link to letters and letter groups (print to sound for reading, sound to print for spelling), we also need to find ways of teaching spelling word banks – groups of words in which the same sound is spelt with the same letters or letter groups – and we need to teach those words which are uniquely spelt or have much more unusual spellings. 64 correspondences amounts to fewer than two spelling alternatives per sound, and yet most sounds have multiple spellings – not just one or two different spellings.

    Now I want to return to the definition of ‘sight words’. You wrote:

    “Letters and Sounds, which teaches multiple letter-sound mappings and no sight words”

    This is misleading, however, because ‘Letters and Sounds’, and other programmes based on the systematic synthetic phonics teaching principles, do indeed introduce words considered irregular or ‘tricky’ from the earliest stages of introducing the alphabetic code. The guidance, however, is to highlight such words by featuring them, pointing out the parts of the words that are easy to decode and pointing out the ‘tricky’ part of the word. This is very different from the ‘whole global shape’ approach. Your description, however, could be interpreted as thinking there is no emphasis on those common, useful words with unusual spellings which readers benefit from for early reading and spelling application.

    Teachers in England in the infant years nowadays are all introducing the alphabetic code – mainly progressing from ‘simple’ to ‘complex’ code – and they also feature those tricky words which are useful too. I just want to end by emphasising again, however, that what phonics provision looks like from school to school cannot be encapsulated in single research studies or summaries so long as there is only a broad-brushstroke notion of what phonics provision and tricky-word teaching looks like.

    Warmest regards,

    Debbie

  3. Debbie Hepplewhite July 2, 2016 at 4:19 pm

    For general interest, this is what it states on page 15 of ‘Letters and Sounds: Notes of Guidance for Parents and Practitioners’ (DfES, 2007) about high-frequency words and sight words:

    ‘When and how should high-frequency words be taught?

    High-frequency words have often been regarded in the past as needing to be taught as ‘sight words’ – words which need to be recognised as visual wholes without much attention to the grapheme–phoneme correspondences in them, even when those correspondences are straightforward. Research has shown, however, that even when words are recognised apparently at sight, this recognition is most efficient when it is underpinned by grapheme–phoneme knowledge.

    What counts as ‘decodable’ depends on the grapheme–phoneme correspondences that have been taught up to any given point. Letters and Sounds recognises this and aligns the introduction of high-frequency words as far as possible with this teaching. As shown in Appendix 1 of the Six-phase Teaching Programme, a quarter of the 100 words occurring most frequently in children’s books are decodable at Phase Two. Once children know letters and can blend VC and CVC words, by repeatedly sounding and blending words such as in, on, it and and, they begin to be able to read them without overt sounding and blending, thus starting to experience what it feels like to read some words automatically. About half of the 100 words are decodable by the end of Phase Four and the majority by the end of Phase Five.

    Even the core of high frequency words which are not transparently decodable using known grapheme–phoneme correspondences usually contain at least one GPC that is familiar. Rather than approach these words as though they were unique entities, it is advisable to start from what is known and register the ‘tricky bit’ in the word. Even the word yacht, often considered one of the most irregular of English words, has two of the three phonemes represented with regular graphemes.’

  4. Max Coltheart July 3, 2016 at 2:54 am

    I agree with the position stated by Anne. This position does need to be stated, because it contradicts at least two ideas that one sees claimed. They are:

    (a) That it is too confusing for young children to be simultaneously taught phonics and to recognize by sight (i.e. not by sounding out) a small number of sight words.

    (b) All words can be sounded out by letter-sound rules so why teach any other way of reading i.e. why teach any sight words?

    Anne has referred to evidence that (a) is untrue: young children can successfully learn letter-sound rules whilst simultaneously learning to recognize some words by sight.

    (b) is also untrue. It is the claim that English has no irregular words since it says that all words of English can be translated from print to speech by rule i.e. are regular. I don’t believe that position can be defended.

    Consider the words GOOD, MOOD and GOON. Is there a set of rules that, when applied to these three words, correctly translates the OO for all three? I think the only way to do this is to have a rule of the form:

    OO is /ʊ/ when preceded by G and followed by D; otherwise it is /uː/

    This is unworkable for two reasons.

    First, if one allows rules as complex as this one, then there will be thousands of rules for the child to be taught – an impracticably large number.

    Second, “OO is /ʊ/ when preceded by G and followed by D; otherwise it is /uː/” is not a rule anyway. A rule is only a rule if it applies to a number of instances. But this rule only applies to a single instance (the word GOOD) so can’t be called a rule.
    Knowing this rule is of no general usefulness if it only applies to one word.

    So we need to accept that (b) is untrue, i.e. that English does have words that are irregular (unlike many other languages written with the Roman alphabet – Finnish, Spanish and Italian have no irregular words), i.e. words whose pronunciation can only be got by learning the pronunciation of the word as a whole.

    • Karina McLachlain July 11, 2016 at 9:51 am

      Dear Professor Coltheart,
      You have not proven that b) is untrue. Your example of the word ‘good’ does not illustrate how individual rules are required to teach exception words. Firstly, because it is not an exception word. The /ʊ/ phoneme represented by ‘oo’ appears in a multitude of words in the English language besides the one that begins with g and ends in d. Take for instance: hood, stood, wood, look, book, shook, cook, chook, hook, nook, rook, took, sook, nookie, hooker, foot, soot, toots, wool, boogy, whoof/woof, poof, hoof, oops, etc. etc. All synthetic phonics instruction programmes teach both oo (book, good) and oo (zoo, mood) as equally important. As it is pretty easy to sound out /g/-/ʊ/-/d/ from these graphemes (g-oo-d), and all kindergarten kids that I have taught to read by this method can read the word ‘good’ before the end of the year, please explain why it is considered a word that cannot be sounded out in the same fashion as any other regular word can. Even methods with less phonics still teach both oo graphemes as a matter of course.

      Similarly, there is a problem with the inclusion of the word ‘good’ in the C&C2 test. The only children that get caught out reading or spelling this word are the ones who have phonological problems, not those who have trouble reading exception words. Next time you are carrying out norming procedures, I suggest that you analyse individual words on the exception word list and compare patterns of correct and incorrect answers with scores on all three word lists as a whole (internal consistency reliability).

      Other words that you might find are very easily read and should be classified as regular words on the CC2 are: deaf, give and bowl. There are two pronunciations for ea: ea (tea) and ea (bread). The second is not unusual at all e.g. dead, deaf, lead, read, abreast, stealth, ahead, health, already, wealth, head, breadth, bedspread, bedstead, cleanliness, cleanser, behead, dreamt, bread, heavy, breakfast, leapt, breast, meant, pleasure, pleasant, deadly, deadlock, deadpan, stealth, dread, dreadful, endeavor, feather, zealous, head, heaven, instead, jealous, leather, meadow, measure, pageant, realm, sergeant, steadfast, stead, steady, spread, sweat, sweater, treasure, treasury, thread, threat, treacherous, treachery, tread, weapon, weather, lead (the metal). Thus deaf is easily synthesised by blending the sounds of the graphemes d-ea-f.

      Whilst magic e normally makes the preceding vowel long, this usually does not hold when the consonant between the preceding vowel and the e is a v. Further – and this can be considered a rule – an e following a v at the end of a word is always silent, e.g. give, have, above, shove, absolve, abrasive, approve, captive, extensive, cohesive, defective, delve, insensitive, involve, motive, olive, parve, pensive, salve, valve etc. In general, novice readers may misspell give as ‘giv’ for a short time, but I have rarely seen it misread after kindergarten. In many synthetic phonics programmes, e.g. ReadWrite Inc., both graphemes ‘ve’ and ‘v’ are taught as ways of spelling the phoneme /v/.

      The grapheme ow has two forms of pronunciation: long o (blow, bowl, snow) and that which is the same as ou, i.e. brown, cow. Without giving a long word list of the former, it is easy to see that it is straightforward to synthesise the pronunciation of bowl as b-long o-l. If the wrong ow is used, they won’t get the word. Children are then taught to try alternatives, which will then give them the correct one.

      Incorrect analysis of these words as exception words when they are not may have led you and your colleagues to exaggerate how many ‘rules’ are actually needed to teach both regular and exception words. As someone at the coal face, I can advise you that there are patterns to English irregularity and, if you understand them, they can be used to teach irregularity much more efficiently, reducing the number of words that need to be taught by sight to a minimum. Please see my comment below if it has been passed by the moderator.

  5. Max Coltheart July 3, 2016 at 4:10 am

    And re John Walker’s reply:

    Two questions, John.

    1. You can understand and correctly reply to the printed question “Is SAIL about ships or shops?” You couldn’t do this if you were responding just to the pronunciations of the words since SAIL and SALE have the same pronunciation. And context cannot help here since context does not favour one of these interpretations. So how could you do this if reading comprehension depends on phonology?

    2. Your view is that all written English words obey letter-sound rules and so all can be correctly translated from print to speech by rule. Can you state the rule for OO that, when applied to GOOD, MOOD and GOON, gets the pronunciation of the OO correct for all three, even when they are presented as single words, i.e. with no context?

    Max

  6. Debbie Hepplewhite July 3, 2016 at 8:59 am

    Hi Dorothy,

    I totally agree. I hope you noted my point above, however, about being precluded from applying for research projects of the Education Endowment Foundation because of ‘commerciality’.

    Considering the conclusions and recommendations of various parliamentary inquiries into reading instruction and early intervention in England, one would have thought that some kind of objective research fully investigating the programmes that qualified for the government’s match-funded phonics initiative (as a result of close scrutiny and matching of teaching principles to research findings) would have attracted the interest of academics/researchers.

    One could argue that the programmes concerned did not, or do not, need to be researched as they already fulfil the core criteria of high-quality reading instruction.

    It is worrying, though, to note the description of ‘phonics’ on the Education Endowment Foundation website, which I would argue is highly misleading and weak with regard to the needs of older learners. If new research projects are not evaluated and compared for content and guidance against existing evidence-informed literacy programmes prior to the new projects being funded and trialled, then we are in danger of hamsters going around a wheel – getting nowhere.

    I hope you noted my points above about the research findings comparing ‘Letters and Sounds’ with Solity’s ERR programme. It would be interesting to see how ERR compares with a fully-resourced reading instruction programme, would it not?

    Further, whilst the Science and Technology select committee inquiry into early intervention endorsed the need for early intervention in literacy, at the same time it lambasted the government for promoting and funding the Reading Recovery programme, which was not in line with the recommendations of Sir Jim Rose and his team – recommendations that clearly marked a move away from multi-cueing reading strategies.

    Regardless of the various vociferous critics of the government’s endorsement of systematic synthetic phonics for use in schools in England, there is a consensus of research findings to date about the dangers of multi-cueing. Yet Reading Recovery remains entrenched in the Institute of Education, and trying to find out, in a transparent, public way, how Reading Recovery might have changed in the light of research is impossible. What are student-teachers and teachers to make of all these contradictions?

    Clearly there is no joined-up thinking and action regarding research and its conclusions. But you will find a growing number of people internationally who are concerned that it remains ‘chance’ as to the content and teaching that children receive in various countries and in various schools.

    As I described above, one cannot even know what a ‘Letters and Sounds’ school looks like without close scrutiny and an experienced eye-view of the implications of the actual provision school to school.

  7. Kate Nation July 3, 2016 at 12:51 pm

    Thank you Debbie for your clear commentary and important reminder that practice varies considerably from school to school. I wasn’t familiar with the EEF’s description of phonics but having checked this morning (https://educationendowmentfoundation.org.uk/evidence/teaching-learning-toolkit/phonics/), I too have concerns. The paragraph:

    “For older readers who are still struggling to develop reading skills, phonics approaches may be less successful than other approaches such as Reading comprehension strategies and Meta-cognition and self-regulation. The difference may indicate that children aged 10 or above who have not succeeded using phonics approaches previously require a different approach, or that these students have other difficulties related to vocabulary and comprehension which phonics does not target”

    is particularly confused. We know that instruction in addition to phonics is critical for improvements in reading comprehension. Interventions that target oral language and vocabulary lead to meaningful and sustained gains in reading comprehension (for a large scale RCT demonstrating this, see Clarke et al. 2010, reference pasted below, and http://readingformeaning.co.uk). But this is quite different to suggesting that children who are still struggling with basic (word-level) reading skills ‘may’ benefit from metacognition and self-regulation strategies with respect to improving their word-level skills. What evidence is there for this? And yes, some older students who are struggling may well need intervention that targets vocabulary and comprehension, but this is not an alternative to phonics. It is a different kettle of fish altogether. And it is perfectly possible (and surely desirable) to work on oral language skills (targeting vocabulary and linguistic comprehension) at the same time as phonics for word-level reading, whether children are beginning readers in the early years, or older children who are struggling.

    Clarke, P., Snowling, M. J., Truelove, E., & Hulme, C. (2010). Ameliorating children’s reading comprehension difficulties: A randomised controlled trial. Psychological Science, 21, 1106-1116. http://pss.sagepub.com/content/21/8/1106.abstract

  8. Debbie Hepplewhite July 3, 2016 at 1:16 pm

    Hi Kate,

    I’m so pleased that you took the time and trouble to check out the Education Endowment Foundation’s description of ‘phonics’ on its website. You noted the same points as me regarding the paragraph you picked out in your message above.

    I’m sorry to provide a link to a whole (rather challenging) thread, but if you scroll to the seventh message, you can see my informal review of the EEF description of phonics – particularly the paragraph you have mentioned. Note my very similar comments to yours:

    http://www.iferi.org/iferi_forum/viewtopic.php?f=2&t=591&p=963#p963

    I find the EEF’s limited comments, such as ‘four months progress’ without further explanation, extraordinarily ‘woolly’ – especially coming from an organisation presenting itself as corporate, prestigious and focused on research findings!

  9. Mandy Nayton July 3, 2016 at 3:35 pm

    This certainly has resulted in an excellent conversation about the place of sight words in a high quality early reading program! I don’t intend to add a lengthy contribution (time poor / long list of things I ‘should’ have finished before tomorrow) but feel the need to add a few ruminations ….
    Firstly, I’m not sure about the notion that many in the field view phonics and sight words as sworn enemies. I do think there are many people (and I admit to being one of them) who are concerned about the growing tendency for early childhood teachers to teach (drill) literally hundreds of ‘sight words’ / ‘reading demons’ / ‘heart words’ alongside highly predictable PM readers in the pre-school years – before children even begin a phonics program (if they happen to be lucky enough to be in a school in which phonics is being taught). Nor do I think we should treat phonics as a ‘method’ of teaching to be delivered in complete isolation. I like to quote Jim Rose on this point: we should not think about phonics as a teaching method but rather as a ‘body of knowledge’. There are obviously many other aspects (bodies of knowledge?) related to successful reading acquisition that need to be considered and taught – print conventions / comprehension / vocabulary / alphabetic knowledge / letter formation / etc.
    Secondly (or perhaps this is my third point?), there is a difference between the teaching of high frequency words (introduced as a component of a phonics program) that enable students to read connected decodable text, and the teaching of sight / tricky / ‘heart’ words that are viewed as irregular (although in reality only ever partially irregular) and, as a consequence, need to be memorised by ‘sight’. Letters and Sounds is an example of a program (or teaching structure) that introduces a discrete number of h-f words as a component of each phase (often including words with P-G relationships that have not yet been taught, e.g. ‘the’), which are then revisited once the constituent P-G relationships are taught. Tricky (irregular) words are also taught. There is a marked difference between this approach and the teaching of 300 sight words to 4 and 5 year olds.
    I realise that this is not the point of the blog but it concerns me a little that there may be some misinterpretation!
    Thanks for the opportunity to comment!

  10. Dick Schutz July 3, 2016 at 6:16 pm

    Regarding “Rules.” The Rules that make all words in English “regular” are not the “orthographic” rules that Max is referring to. Those rules collectively are indeed too convoluted and complex to be useful in reading instruction and of dubious worth when applied to spelling.

    The applicable rules are the “Rules of Grapheme/Correspondence” that constitute the “English Alphabetic Code.” All Alphabetic reading systems are governed by such a Code. The Codes of some languages such as Spanish, Finnish, and Turkish have a small number of Rules [of Correspondence]; not much larger than the number of letters in the alphabet. Such Codes are spoken of as “transparent” and “regular.”

    The Code governing English is just as “regular.” And each Code Rule is just as “transparent.” Due to the history of England and the evolution of the language, the English Code consists of 175ish Rules of Correspondence. [The “ish” depends on which investigator is counting.] The English Code is spoken of as “opaque,” but that applies to the person doing the looking, not to the Code.

    Discourse involving the Code is muddied because psychologists speak of the “Alphabetic Principle” and educationists speak of “Phonics”; the history and substance of the Code gets lost in the fog of schooling.

    The Code is an important consideration in schooling, because teaching children how to read Finnish/Spanish/Turkish/etc. Code is a matter of a few weeks/months. Teaching how to read English is a matter of the “reading wars” that has yet to be untangled.

    Re “research.” Yes, “more research is needed,” but ever ’twas so, and forever will be. Actually, “good data” are currently available, but have yet to be analyzed. The reading initiative in England (under the banner of “Systematic Synthetic Phonics”) constitutes the ingredients of a Natural Schooling Experiment – with a reliable Alphabetic Code [Phonics] Screening Check as the Dependent Variable and the instruction provided in LEAs, Schools, and Classes through Years 1 and 2 as the Independent Variable. Analysis to date shows steady improvement at the National level over the years 2011-2015, and wide variability in results at the LEA level. All indications are that the LEA variability is a function of instruction differences rather than demographic/biographic differences. Data at the school and class level are “there,” but they have yet to be analyzed.

    The Screening Check is applicable at any age to identify individuals who need further instruction in handling the English Alphabetic Code. It’s quick, “no-cost” and unobtrusive. I liken it to the Snellen eye test used in driver licensing. That is, there are “better measures” and there is “more involved,” but it “gets the job done.” My understanding is that steps are being taken in Australia to administer the measure, but I don’t know anything about the details.

    Labeling words as “sight,” “decodable,” “tricky,” “regular,” “irregular,” or “phonetic” does little to clarify reading instruction and has not led to any diminution of the skirmishes involving the terms. Seems to me that the methodology of Natural Schooling Experimentation is a “better way forward.”

  11. John Walker July 3, 2016 at 7:58 pm

    I’ll try and go some way to answer both Anne’s and Dorothy’s responses.
    In March I published a post on my blog (http://literacyblog.blogspot.co.uk/2016/03/the-best-that-we-can-be.html) giving the results of a spelling test taken by 29 children in one of the most deprived schools in England, St George’s Church of England Primary School in Wandsworth. Every child was ahead of their chronological age on the spelling test, 27 of them by double-figure margins that, if you care to read the post, will make your eyes water. The school is situated in the most deprived area of south London and has over fifty per cent of its children on free school meals. Last year every single one of these children passed the Phonics Screening Check. And guess what: the same school did it again with this year’s Year 1.
    These figures pretty much replicate the data (http://literacyblog.blogspot.co.uk/2016/03/st-thomas-aquinass-remarkable-results.html) we collected on the children of St Thomas Aquinas, where we piloted our programme. At the end of three years, of the fifty children in the pilot group, 49 were ahead of their chronological ages, on the same spelling test, most by the same staggering margins. And, we ran the results of their SATs tests in reading and writing alongside the results. The correlation between the two sets of data is obvious. We also have similar kinds of data from other schools.
    So, Dorothy, it is enormously frustrating to be told that we need RCTs when, firstly, Sounds-Write can’t for obvious reasons run them itself, and, secondly, no-one in the academic community or the government here in the UK is willing to step forward. You, Dorothy, say ‘bring it on’ and I say to you, Yes! Bring it on: in any English-speaking country, any time.
    For anyone who’s interested, between 2003 and 2009, we collected spelling test results on 1607 pupils throughout their first three years in primary school. The schools tested the children and reported the data back to us. You can read it here: http://sounds-write.co.uk/docs/sounds_write_research_report_2009.pdf
    It may not be the gold standard RCT, but if you trouble to look at the enormous detail the compiler of this data, David Philpot, a (now retired) educational psychologist and mathematician, amassed, you should find it very interesting.

  12. Dick Schutz July 4, 2016 at 2:34 am

    The question, “Is sight-word teaching justified?” is confounded by a few considerations worth noting:

    One. Some children enter school already having “sight words” in their repertoire – their name and other words they’ve learned by attending to food labels, street signs, and such. These words have not been learned by “phonics,” and the fact that they have indeed been learned does not interfere with “phonics” instruction.

    Two. Many children DO learn to read when taught by “Whole Language” or “mixed methods.” How reliably they can read is a different matter, but many read “good enough” to get by.

    Three. Children CAN reliably be taught “sight words,” and nearly all SSP programmes include such instruction. They restrict the instruction to “high frequency words” and focus the instruction on “tricky” word elements. However, from a littlie’s perspective, all words are “tricky,” and recalling which letters are “tricky” and which are not is “trickier” than “just memorize it – for now,” which children WILL do irrespective of what they’re told to do; and because the words occur so frequently it’s usually “no problem.”

    Four. The “interference of sight word instruction with phonics instruction” is frequently observable only at a later time. Some students will overcome the “interference” with self-instruction. Others perennially confuse even short high-frequency function words such as pronouns and prepositions. Many students are only identified as “problems” in Yr 3-4 and beyond, when they encounter texts involving a high proportion of unfamiliar words. Schools take credit for all students’ success and attribute all failures to the students and their parents, overlooking the instruction that has been provided.

    Other sectors encounter analogous confounding considerations and have worked out methodology to deal with them. Absent such methodology, we’ll continue to muddle.

  13. Kate Nation July 4, 2016 at 9:26 am

    Just a quick comment to note that alongside intervention studies, computational models hold promise as a means to compare the efficacy of different teaching methods. Ziegler et al. implemented a computational version of Share’s self-teaching hypothesis. Their approach was to start small with a set of GPCs. This provided rudimentary decoding skills, sufficient to either correctly read a word, or get close enough to activate a word unit in the phonological lexicon, allowing word-specific orthographic information to be learned. In turn, this feedback to the GPC knowledge base refined and extended phonological decoding skills, thus benefiting future decoding attempts (without additional instruction but via “self-teaching” from the reading experience). In short, each successful decoding event had two consequences: first, establishing a direct connection between a letter string and whole-word phonology and, second, improvements to the decoding system more generally. Overall, this captures the self-teaching hypothesis well.
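    [Editor's illustration] The self-teaching loop described above can be caricatured in a few lines of Python. This is a toy sketch under invented assumptions (a handful of single-letter GPCs, a tiny phonological lexicon with made-up phoneme strings), not the Ziegler et al. model itself, which is a full connectionist implementation:

```python
# Toy sketch of the self-teaching loop (illustrative only; all words,
# GPCs and the matching rule below are simplified assumptions).

# Starting grapheme-phoneme correspondences: a deliberately small set.
gpcs = {"s": "s", "a": "a", "t": "t", "p": "p", "i": "i", "n": "n"}

# Phonological lexicon: pronunciations the child already knows from speech
# ("Sip" stands in for the pronunciation of 'ship' with an invented symbol).
phonological_lexicon = {"sat": "sat", "tin": "tin", "pant": "pant", "ship": "Sip"}

orthographic_lexicon = {}  # word-specific spellings learned via self-teaching

def decode(word):
    """Left-to-right, letter-by-letter decoding with the current GPC set."""
    return "".join(gpcs.get(letter, "?") for letter in word)

for word in ["sat", "tin", "pant", "ship"]:
    attempt = decode(word)
    if attempt in phonological_lexicon.values():
        # Successful decoding: store the word-specific orthographic form,
        # establishing a direct print-to-pronunciation connection.
        orthographic_lexicon[word] = attempt

# 'ship' fails because no GPC for 'sh' has been taught yet; in the full
# model a close-enough attempt would still activate the word unit and
# feed back to refine the GPC knowledge base.
print(sorted(orthographic_lexicon))
```

    Each pass through real text would grow the orthographic lexicon and, in the full model, sharpen the decoding system itself, which is the two-consequence point made above.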

    The paper is Open Access and can be downloaded here:

    http://rstb.royalsocietypublishing.org/content/369/1634/20120397

    I’m not a computational modeler, but wonder whether it might be possible to compare different training regimes (varying the number of GPCs in the starting set for example) and examining the consequences for orthographic learning.

    NB. Share’s self-teaching hypothesis can be downloaded here:

    http://www.sciencedirect.com/science/article/pii/0010027794006452

  14. Laura Shapiro July 5, 2016 at 11:05 am

    Thanks for a really interesting blog, Anne, and it is great to see so much discussion! There were a few questions relating to the details of the Shapiro and Solity (2015) study, so here are some clarifications to start with before I get to grips with the broader discussion (and I’m continuing to read the replies, so apologies if I’ve missed some questions). Of course I’m very happy to send reprints.

    Debbie Hepplewhite points out that Letters and Sounds does teach high frequency words. In our paper, we give more details on the way that high frequency words are taught in ERR vs. Letters and Sounds: the basic difference is that (as Debbie explains) children are encouraged to partially decode these words under L&S, whereas under ERR they are taught as whole words. The consequence is that under ERR, children are given more practice recognising these words as wholes (in fact they get very frequent practice, although overall teaching time is similar), whereas under L&S, children get more practice in phonically decoding these words.

    Anne neatly summarises our main message (the two programmes were equally effective for the majority of children, with some evidence of an advantage of ERR for children with poor phonological awareness). It is important to stress that this was a quasi-experimental study, and although the approach to sight words is likely to be critical, there were other differences between the programmes.

    In terms of the implications – we’ve highlighted that the details of synthetic phonics programmes may critically affect their effectiveness for children at risk of reading difficulties. So I completely agree that we need more experimental work to test these details (and as Dorothy Bishop points out, this certainly calls for a randomised controlled study!).

  15. Janet Vousden July 5, 2016 at 1:27 pm

    Thanks Anne for starting this blog – some really great points raised here.

    It would be really instructive to have some data on the effectiveness of teaching phonics plus “something else for high frequency and/or irregular words”. I agree with Anne that although teaching some form of sight vocabulary is widespread, there is little research that questions how much of this “something else” we should teach, when we should introduce it, how we should teach it, etc.

    In terms of computational modelling, these methods also provide a means of comparing the content of literacy schemes (e.g. as Kate suggests, we can vary the number of GPCs taught, or look at the effect of teaching different GPC sets). Although very theoretical, it does provide an evidence-based rationale for choosing which GPCs might be best to teach, and demonstrates how the law of diminishing returns might be a useful way of thinking about learning trajectories. The work I have done with colleagues in this respect makes a simple point: you will be able to decode more words if you consider the utility of the GPCs that you learn, and at some stage the effort of learning additional GPCs will outweigh any gains in terms of reading more words.
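    [Editor's illustration] The diminishing-returns point can be sketched with a small, entirely hypothetical calculation: teach GPCs in a fixed order and track how many words of a made-up corpus become fully decodable after each one. The corpus, segmentations and teaching order below are invented for illustration, not drawn from the published analyses:

```python
# Hypothetical sketch: cumulative decodability as GPCs are taught in a
# utility-ranked order. Early GPCs unlock several words; later, rarer
# GPCs (e.g. 'igh') each unlock only one - the diminishing return.

corpus = ["sat", "pin", "tap", "chip", "chat", "moon", "soon", "sign"]

# Assumed grapheme parse for each word (a real analysis would derive these).
segmentations = {
    "sat": ["s", "a", "t"], "pin": ["p", "i", "n"], "tap": ["t", "a", "p"],
    "chip": ["ch", "i", "p"], "chat": ["ch", "a", "t"],
    "moon": ["m", "oo", "n"], "soon": ["s", "oo", "n"],
    "sign": ["s", "igh", "n"],
}

teaching_order = ["s", "a", "t", "p", "i", "n", "ch", "m", "oo", "igh"]

taught = set()
for gpc in teaching_order:
    taught.add(gpc)
    # A word is decodable once every grapheme in its parse has been taught.
    decodable = sum(1 for segs in segmentations.values() if set(segs) <= taught)
    print(f"after {gpc!r} ({len(taught)} GPCs taught): "
          f"{decodable}/{len(corpus)} words decodable")
```

    On real corpora the same bookkeeping, weighted by word frequency, is what lets one ask where the effort of teaching a further GPC stops paying for itself.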

    Some of the work I have done in this respect can be found here:

    https://www.researchgate.net/publication/235921878_Comparing_the_content_of_UK_reading_programs_using_the_Simplicity_Principle

    and here:

    https://www.researchgate.net/publication/50830414_Simplifying_Reading_Applying_the_Simplicity_Principle_to_Reading

    although of course the appropriate behavioural studies that emerge from this type of modelling still need to be done, and it’s great that this discussion has got going, particularly with respect to sight vocabulary/irregular words!

  16. Dick Schutz July 5, 2016 at 5:40 pm

    “It would be really instructive to have some data on the effectiveness of teaching phonics plus ‘something else for high frequency and/or irregular words’.”

    Amen, say we all. But when the requirement that the “data” must come from Randomized-Control Trials is added, hell is likely to freeze over before we get this information. Meanwhile, electronic tablets can already “read” and can translate from one language to another. The electronic devices will “soon” be programmed to reliably teach children how to read. This is already being explored for 1-to-1 Alphabetic Codes. But getting “electronic tablets” available to all primary school children in AU, UK, and US will be a daunting endeavor. So that’s a whole nother story.

    The question is what can be done NOW. My answer to the question: exploit the “good data” already available in the Natural Experiment in England. The “population database” here is so large that randomized samples of schools and classes can readily be drawn and replicated as many times as required to “verify the results.” Moreover, a fresh cohort of “subjects” enters school each year, naturally providing “fresh experimental participants” for further verification and for manipulation/testing of new Independent Variables. In short, there is more than one way to operationalize Randomized Control.

    The “big shifts” required, and the application of the “Simplicity Principle,” test instruction rather than students and impinge on the researcher community rather than on the teacher community.

    Change, anyone? Actually, it’s not much of a change–in principle. But the gap between “in principle” and “in practice” is always problematic.

  17. Jonathan Solity July 6, 2016 at 4:01 pm

    My comments follow on from those of Laura and Janet and address the implications of teaching pupils more than an optimal number of GPCs and phonically irregular words read by sight.

    Over recent years my colleagues and I have created a database of real books that potentially addresses some of the issues raised previously. We have entered every word in approximately 1200 books on to the database, which has generated 1,044,651 word tokens. We have identified (i) the most frequently occurring words on the database and (ii) the GPCs required to read every word on the database. The 16 most frequently occurring words account for 28.59% of the word tokens. Where children are only taught the most frequently occurring phoneme for each grapheme, only four of these words are phonically regular, and they account for just 6.95% of the word tokens. The 100 most frequently occurring words account for 53.26% of the word tokens. Only 45 of these words are phonically regular, and they account for 18.62% of the word tokens on the database. Thus, after teaching the most frequently occurring phoneme for each grapheme, pupils would need to be taught 35 further GPCs to read the additional 55 high frequency words, which would include teaching multiple phonemes for the majority of graphemes. Doing so, as I suggest below, potentially causes pupils considerable confusion.
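    [Editor's illustration] The coverage statistic reported above (e.g. the 16 most frequent word types accounting for 28.59% of tokens) is straightforward to compute. A minimal sketch on an invented toy corpus; the real database holds roughly 1,044,651 tokens from about 1200 books:

```python
from collections import Counter

# Tiny invented corpus; in the real analysis each word of ~1200 books
# would be a token here.
tokens = ("the cat sat on the mat the dog sat on the log "
          "the cat and the dog ran to the mat").split()

counts = Counter(tokens)
total = len(tokens)

def coverage(n):
    """Proportion of all tokens covered by the n most frequent word types."""
    return sum(c for _, c in counts.most_common(n)) / total

for n in (1, 3, 5):
    print(f"top {n} word types cover {coverage(n):.0%} of tokens")
```

    The same few lines, run over a million-token corpus, give exactly the top-16 and top-100 figures quoted above.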

    Our research indicates that the optimal number of letter combinations to teach is around 31. There are several important instructional implications of going beyond this number and teaching additional GPCs. One is that many of the additional GPCs are low frequency and account for few word types on the database. For example, ‘oor’ (O:r*) only appears in seven words. Similarly, when multiple phonemes are taught for a single grapheme it is often the case that they are also low frequency, with only one or two words accounting for the majority of word tokens in which they appear. For example, ‘ch’ represents three phonemes (tS in chin; k in school; S in chef). There are 19 word types where ‘ch’ represents the phoneme ‘k’, which have generated 965 word tokens, the majority of which (85%) appear in just two words, ‘school’ (794) and ‘schools’ (27).

    A further implication concerns the number of possible phonically correct pronunciations that result from teaching multiple phonemes for graphemes. As well as identifying all the GPCs required to read the 1,044,651 word tokens on the database, we have conducted analyses to determine the proportion of these words that could be decoded if pupils had mastered all the skills taught through a number of the most popular phonics programmes currently being used in schools. What we found was quite interesting. It transpires that the confidence children can have that they have produced the correct pronunciation for the words they have decoded decreases with the number of GPCs and multiple phonemes that they are taught. For example, with Letters & Sounds over 22% of the words that could be decoded would have an alternative, phonically plausible, pronunciation. With one synthetic phonics programme over 50% of the words that could be decoded would have an alternative pronunciation, and with another programme this figure would be over 95%. When pupils are required to rely exclusively on their phonic skills when reading, they have no way of knowing which phonically plausible pronunciation is correct when graphemes represent two or more phonemes.
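    [Editor's illustration] The ambiguity described above can be made concrete: once a grapheme is taught with several phonemes, a purely phonic decoder faces a combinatorial set of plausible pronunciations. A hedged sketch, with invented phoneme symbols and grapheme parses:

```python
from itertools import product

# Toy GPC table: some graphemes map to several phonemes (symbols invented;
# 'oo' as in mood vs good, 'ch' as in chin / school / chef).
gpc = {
    "g": ["g"], "oo": ["u:", "U"], "d": ["d"],
    "ch": ["tS", "k", "S"], "s": ["s"], "l": ["l"],
}

def plausible_pronunciations(segmentation):
    """All phoneme strings a purely phonic decoder could produce
    from a given grapheme parse of a word."""
    return ["-".join(p) for p in product(*(gpc[g] for g in segmentation))]

# 'good' yields two phonically plausible readings (oo long or short)...
print(plausible_pronunciations(["g", "oo", "d"]))
# ...and 'school' yields 3 x 2 = 6, only one of which is correct.
print(len(plausible_pronunciations(["s", "ch", "oo", "l"])))
```

    The more multi-valued graphemes a programme teaches, the larger this candidate set grows, which is the mechanism behind the 22%/50%/95% figures quoted above.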

    We are fast approaching a point where analyses of children’s literature are helping to refine the high frequency words and GPCs that can usefully be taught. The impact of this knowledge can then be determined through appropriate experimental research. Our database enables us to identify the phonically irregular words and GPCs required to read every book on the database. This means that teachers can supplement the phonics programmes they use with real books and be confident that children will have appropriate opportunities to apply the phonic skills they are being taught to real books. We will then be able to research the impact of inviting pupils to apply their phonic knowledge to phonically decodable books or to real books.

  18. Kate Nation July 6, 2016 at 4:08 pm

    This is a fascinating dataset Jonathan and I look forward to learning more about it. Thank you for posting.

  19. Debbie Hepplewhite July 6, 2016 at 5:58 pm

    The emphasis in this topic is on ‘reading’ – starting with what people understand regarding the terminology ‘sight words’ and including what might be a judicious mix of teaching the letter/s-sound correspondences of the alphabetic code along with high-frequency words found in children’s ‘real’ books.

    I suggest, however, that this is in danger of taking a somewhat limited look at the potential of phonics teaching. The question has been raised as to whether there is an optimal number of correspondences and specific words that could, or should, be taught and this is clearly addressed very mathematically, or scientifically, when considering the analysis conducted by Jonathan of the content of ‘real books’.

    Phonics teaching includes teaching the ‘knowledge’ of the English alphabetic code – that is, the letter/s-sound correspondences (and, reversibly, the sound-letter/s correspondences). It also includes teaching the phonics skills of decoding or synthesising (sounding out and blending) and, reversibly, of encoding – that is, oral segmenting the spoken word and allotting letters and letter groups to ‘spell’ the sounds for spelling purposes.

    John Walker points to both the decoding results and the spelling results of the particular approach underpinning a specific programme – Sounds-Write – which he would be happier to refer to as a ‘linguistic phonics’ programme rather than a systematic synthetic phonics programme – although the programme provides for both blending for reading and segmenting for spelling. The programme addresses both spelling and reading as should any high-quality phonics programme regardless of title.

    What is important in mentioning this is the issue of spelling. Whether more aptly labelled as a ‘systematic synthetic phonics’ programme or a ‘linguistic phonics’ programme, the point is that these programmes provide for both reading and spelling. Further, when there is plenty of material provided in the programmes consisting of cumulative banks of new words and cumulative texts for children to practise their decoding (and encoding), then children very rapidly become ‘readers’ and the vast majority of them can indeed start reading more widely still than the controlled, cumulative decodable texts which some prefer to call ‘real books’.

    What might be an interesting question for researchers is how many children, and how quickly, can soon self-teach and read more widely than the material, or phonics stage, in the phonics programmes.

    Further, just how many letter/s-sound correspondences (or sound-letter/s correspondences if people prefer to consider the code from sound to print), do teachers need to teach to not only get children up and reading, but also get children on the road to strong spelling?

    I think spelling is the weak relation here while the emphasis remains on reading acquisition and reading results. A very large emphasis in the programmes that I am associated with, for example, is on spelling – including the spelling of tricky words and the building up of knowledge through spelling word banks.

    Then, in order to have a view on ‘how many’ correspondences to teach to get children on the road to reading (as in the capacity to read ‘real books’), the question should be raised as to whether minimising the formal teaching of the alphabetic code, on the argument that it is unnecessary to teach the code comprehensively, is justified – because what about spelling? I repeat: when, for spelling purposes, there are over 44 units of sound (at the level of the phoneme) that can be identified through oral segmenting, and most sounds have multiple ways of spelling them, then around 60 correspondences is very minimal indeed – and what about spelling word bank work? 60 or so correspondences is not even two spelling alternatives for every sound.

    To leave so many correspondences untaught, or not guaranteed to be taught, is surely remiss given the scale of the challenges of spelling and reading in the English language. See here for one example of an English alphabetic code chart – not definitive and not the whole code:
    http://alphabeticcodecharts.com/One_side_ACC_with_IPA_symbols.pdf

    Teachers in England, at the very least, are instructed, rightly, to teach the alphabetic code as a reversible code, and to teach the skills of decoding and encoding simultaneously. Surely researchers should be investigating the relationships and outcomes among the variations in phonics spelling programmes moving forwards – and not just reading.

  20. Kate Nation July 6, 2016 at 8:39 pm

    Thank you Debbie for another thoughtful and interesting post. I agree completely that spelling is neglected, especially in the research literature. I did my PhD on early spelling development, and I remember submitting my first ever paper for publication reporting some of my findings. One reviewer said that there was nothing of interest in spelling, as reading is what matters. How wrong!! I’m pleased to say that the paper was published – Nation & Hulme, 1996, Journal of Experimental Child Psychology – and I’m still proud of it… some 20 years later, gulp!

    There is, though, some fascinating basic research on spelling out there, albeit not enough — I’ll see what I can do about tempting someone to write a blog post on the topic to get us all talking.

    More generally, thank you to everyone who has commented here, on twitter, and on the associated blogs by John Walker and Alison @Spelfabet. It’s been fascinating. There’s much for us all to take on board and think about.

  21. Dick Schutz July 6, 2016 at 9:03 pm

    “We have identified (i) the most frequently occurring words on the database and (ii) the GPCs required to read every word on the database.”
    Is this information available, Jonathan? It would be of interest to compare your compilation with other compilations of “High Frequency Words” and “Rules of Grapheme/Phoneme Correspondence” that are available.
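    For concreteness, the sort of compilation described above can be sketched in a few lines of Python. Everything here is an assumption for illustration – a tiny token list standing in for a corpus and a deliberately small GPC inventory – not Jonathan's actual database or method:

```python
from collections import Counter

# Hypothetical inputs: tokens from a children's text corpus and a tiny
# grapheme inventory (each grapheme standing for one GPC).
tokens = ["the", "cat", "sat", "on", "the", "mat", "the", "cat", "ran"]
gpcs = ["th", "ch", "a", "e", "o", "c", "t", "s", "m", "n", "r"]

# (i) the most frequently occurring words
freq = Counter(tokens)
high_frequency = [w for w, _ in freq.most_common(5)]

# (ii) the GPCs required to read every word: greedily match the longest
# grapheme at each position; a word is covered only if matching never fails.
def graphemes(word):
    out, i = [], 0
    while i < len(word):
        match = next((g for g in sorted(gpcs, key=len, reverse=True)
                      if word.startswith(g, i)), None)
        if match is None:
            return None  # word needs a GPC outside the inventory
        out.append(match)
        i += len(match)
    return out

required = {g for w in freq for g in (graphemes(w) or [])}
print(high_frequency)
print(sorted(required))
```

On a real database the interesting output is the trade-off curve: how many GPCs are needed before every high-frequency word is covered.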

    When classifying words as “regular/irregular” a couple of considerations warrant attention.

    One, some of the “irregularities” are so small that they are “distinctions without a difference.” That is, as long as the tokens are “close,” children can make the generalization.

    Two, the same principle applies to pronunciation. For example, “said” is considered “irregular/tricky,” but if a child pronounces it /sayed/ or /sid/, that’s “close enough.” In fact, some dialects do “talk that way.” The same holds for “is” and “was”: /isss/ and /wass/ are close enough.
    The same goes for “GOOD, MOOD and GOON,” which Max used as examples earlier.

    These considerations greatly reduce the complexity of the Alphabetic Code when applied to reading instruction, but interestingly they don’t do anything for spelling. “SP” orthodoxy that “spelling is the reverse of reading” actually works against children’s learning to make these adaptive generalizations.

    A different consideration warrants attention in both reading and spelling instruction. Kids are spending “a lot” of time txting. Not only does this involve abbreviated text that departs from the Alphabetic Code (as all abbreviations and contractions do), it involves emoticons and other symbols that are learned “by sight” rather than by “phonics.”

  22. Pat Stone July 7, 2016 at 12:47 am

    May I ask people to read Beverley Randell about writing children’s early reading books; about which words they *need* in order to have a real reading experience, with a book, that entails more than phonics blending practice. http://files.eric.ed.gov/fulltext/ED436740.pdf
    I use Randell’s books. I take *sight* words from them. I would not ever teach words from a discombobulated list. I suspect that schemes which focus on spelling are giving children words they *need* for writing what they want and need to say, which is why they learn them so easily / quickly. Why is this need and desire OK in spelling / writing, but not in reading?
    Teaching reading, there comes a point at which I can say, “Some words you know. Some words you can work out as you go. Some words you get to from the story so far. Now do these all at once.” And they do. It is valuable to them that I understand what they are doing. Subsequently, these three aspects of reading go up and down in myriad combinations, like the sound bars on a music system’s visual display. If I focus on only one of the three, whichever that might be, the child will try to please me and will ignore or get confused by the others. “Why is she telling me to do this when these other 2 strategies are suggesting themselves to me?”
    Please read Beverley Randell…
    And please don’t forget to always work closely with some children so that all your theories and statistics have some connection to their learning.

  23. Dick Schutz July 7, 2016 at 4:30 pm

    “There comes a time. . .”

    Ah, yes. When (and if) that time comes, children can handle the Alphabetic Code (per a Screening Check) and require no further instruction in reading per se. The question of terming some words “sight/tricky” and teaching children to memorize them in part or whole is in play only before this “time comes,” particularly early on, when children are still being taught to read and can handle only a very few Correspondences.

    Before “the time comes,” children can experience reading books of their own choosing, which you are encouraging, through “team/buddy” reading, where the person who “can read” pronounces the words the child hasn’t yet been taught how to handle. This technique avoids “sight word” instruction and is a means of extending the repertoire of what are termed “decodable” books or “real books.” It also gives the child practice in distinguishing words s/he has not yet been taught how to handle, thereby promoting self-teaching.

    • Pat Stone July 7, 2016 at 9:04 pm

      “There comes a time. . .”
      I wrote, “…there comes a point…” Perhaps your instinctive urge to predict overrode what is actually there? This is one of the things children and adults do all the time. If the error alters the meaning of the text, we go back and look again and fix it. Have we all been taught badly?
      When I say, “There comes a point”, it is because I am a teacher and I have brought that point about. Nothing random or arbitrary about it.
      “Handling a very few correspondences.” Someone can read but cannot handle enough correspondences? How many and which correspondences should they have?

      As for your second paragraph, I have no idea what you are talking about? What am I encouraging? Who are these team/buddies? What are words a child ‘has not yet been taught how to handle’? Do you mean it is helpful for children who can’t read to be able to share books with someone who can read? That is not news?

      I don’t trust all this talk of words. Children need to be able to read books. There is more to reading books than word word word. Whatever the words are, children need to be able to put them together and make meaning.

  24. Debbie Hepplewhite July 7, 2016 at 6:16 pm

    Hi Anne,

    I really like the expression ‘word-focused’ provision and there is a constant need for ‘word-focused’ work within phonics provision – but there is also a need for a focus on grouping words together that are spelt with the same letter/s-sound correspondences (and finding ways to ‘link’ them together) which I refer to as ‘spelling word bank’ work.

    In my way of approaching phonics, ‘spelling word banks’ are not about ‘word families’ which are more closely related to onset and rime type phonics. By this I mean, a word family tends to include words with the same endings such as ‘mop, top, shop, flop, stop’. A spelling word bank, however, would include words such as ‘coat, road, soak’ and so on – where endings may be very different.

    Park that particular point for now, however, and let’s focus on terminology.

    I notice that in Australia and in the USA, and amongst academics generally, there seems to be a very big focus on ‘rules’ to explain the many spellings in the English alphabetic code. There is a move away from this strict notion of ‘rules’ in some phonics approaches, notably in programmes such as Sounds-Write and the Sound Reading System – programmes devised on the basis of Professor Diane McGuinness’s work. They describe themselves as ‘linguistic phonics’ programmes and they are very anti ‘rules’ per se. This explains the comments John Walker has posted on your guest blog: all spelling is approached simply as another spelling alternative for the sounds – a totally ‘sound to print’, code-based approach, rather than one that considers various words to ‘break the rules’.

    In great contrast, there is the approach whereby the most frequent spelling of a sound is considered ‘regular’ and other spelling alternatives for the same sound are defined as ‘irregular’. This leads to a rather large group of ‘irregular’ letter/s-sound correspondences.

    I note that some people describe every letter/s-sound correspondence as a ‘rule’ rather than as simply a ‘correspondence’ or ‘link’ between letter/s and sounds.

    This inevitably leads to an approach whereby a lot of the phonics teaching is presented as ‘breaking the rules’ (of the ‘regular words’ definition), which can appear a very negative and perhaps overly complicated way of approaching phonics. It leads to huge numbers of words being considered ‘irregular’ rather than simply having ‘alternative spellings’ for the sounds. One question might be: does this make a difference in teaching and results, and how much?

    This is such an interesting, and perhaps very important topic, because it is not just about the ‘when’ and ‘how’ and ‘how many’ of teaching sight words, it suggests that amongst the broad umbrella description of ‘systematic synthetic phonics’, we do have variations of what this looks like and of the underpinning rationale of the various, specific phonics programmes. Words considered ‘tricky’ or ‘irregular’ in one phonics programme may not be considered tricky or irregular at all in another programme!

    If anyone has read my comments on your post, and also John Walker’s comments, you will note that we are both drawing attention to the apparent lack of interest amongst researchers in looking more closely at the effects of specific phonics programmes and approaches.

    In order to move the debate along in England, researchers, practitioners and programme writers came together under the auspices of the Reading Reform Foundation and they were able to do this by identifying specific ‘teaching principles’ that they have ‘in common’.

    This was helped along by Prof Diane McGuinness doing the same thing by identifying a ‘prototype’ of programme/practice that she identified from research.

    One of the most important aspects to come out of the identification of a prototype – and this went on to develop into the ‘core criteria’ of phonics provision used by the government/s in England – is to identify what NOT to do, and not just WHAT to do.

    Understanding definitions is very important as can be seen by your blog posting and the responses to it!

    Perhaps we can’t really use the ‘rules-focused’ expression while there is no commonality, or no agreement, about the notion of the ‘rules’, even though these have been officially defined and accepted by academics in the research community.

    In other words, not only is there no common agreement and understanding about what is meant by ‘sight words’, there is also no common agreement in reality by what is meant by the ‘rules’ of phonics.

    There is also clearly no common agreement about ‘how many’ of the letter/s-sound correspondences it’s a good idea to teach for reading – and what about teaching the code for spelling?

    By the way, I’m sorry I didn’t copy and paste the link to my example alphabetic code chart very carefully on a previous posting, I’ll try again here so that there is a direct link:

    http://alphabeticcodecharts.com/One_side_ACC_with_IPA_symbols.pdf

    Warmest regards,

    Debbie

  25. Dick Schutz July 7, 2016 at 8:30 pm

    There are (at least) three different ways in which the term “rules” is used in dealing with reading and spelling instruction.

    One, Orthographic Rules. These morphemic rules are unnecessarily cumbersome for single-syllable words in early reading instruction, but they are relevant for multi-syllable words in reading instruction and throughout spelling instruction. Peter Bryant and Terezinha Nunes have authored an informative PowerPoint presentation on this matter that can be accessed by googling: Beyond Grapheme-Phoneme Correspondences.

    Two, Rules of Grapheme-Phoneme Correspondences. Linguists consider each of the Correspondences to be a Rule, but most psychologists and educationists don’t think this way, so they find the usage jarring or worse, since the Rules are often over-simply referred to as the Alphabetic Principle rather than the Alphabetic Code. The charts that Debbie has linked to are a convenient depiction of the Code.

    Three. Pedagogical (Phonics) Rules. These are the rules or Principles of Synthetic/Linguistic Phonics that Debbie refers to. The Principles were codified in the programme requirements for the “Matched Funding” of materials in England, but are relaxed in the National Curriculum. A test of these Rules will be provided when results of the Phonics Screening Check are analyzed at the school and class level.

    There is communication dissonance when only one of these three usages is considered “regular” and alternate usages/spellings are treated as “irregular.”

  26. Karina McLachlain July 11, 2016 at 7:03 am

    Your blog points to research which shows that the teaching of exception words by sight is effective. But how effective is it? The ‘Look Say’ method is more effective at teaching reading (both regular and exception words) than not teaching reading at all. However, it has been established as being the worst method of teaching reading when compared to all other possible methods of instruction. If teaching exception words as sight words (which is the look-say method) is compared to not teaching them at all and/or includes no comparison to alternative methods, then how useful are the results and conclusions from this type of research? All we can conclude is that teaching sight words is about as effective as look-say, which isn’t a great recommendation.

    There are other methods of teaching exception words than the limited suggestions of: writing, copying, repeated pronunciation, or sequential letter naming. All of these suggestions are whole word/whole language methods (i.e. lexical route methods) only. This limited list screams the belief that there is no place whatsoever for pointing out letter-sound correspondences in the teaching of exception words.

    When research was done to compare reading instruction methods – Look-Say, multi-cueing, analytic phonics and synthetic phonics – the higher the phonics content, the more effective the method was found to be. This analysis should, in my opinion, also be extended to the teaching of exception words on their own: the higher the phonics content in the teaching of them, the more effective the teaching. Exception words don’t have all regular one-to-one letter-sound correspondences. This would seem to rule out the sub-lexical route (synthetic phonics) for reading these words – OR DOES IT?

    Researchers Johnston and Watson do not recommend teaching any words by sight. They suggest that, when teaching exception words, both regular and irregular grapheme-phoneme correspondences are pointed out. It seems that with Johnston and Watson’s advice, many exception words can be read via the sub-lexical route by teaching word-specific grapheme-phoneme correspondences. Many exception words have only one letter that does not follow the regular pattern. Wouldn’t it be more effective to teach that letter-sound correspondence (which can be generalised to several other exception words) than to default to teaching the whole word as irregular? Especially since patterns exist in the irregularity of English.

    In the word ‘want’ only one letter doesn’t follow regular letter-sound correspondences – the a, which takes an /o/ sound. However, this is not the only word where a makes the sound /o/: in most words that begin with wa or swa, the a takes the /o/ sound. By looking for patterns of irregularity and teaching words in groups that share the same pattern – e.g. want, was, watch, waddle, swap, swan, swab – we can make the process more efficient than teaching exception words one by one in isolation. There are many groups that share the same irregular grapheme and can be taught together, e.g. would, should and could, or bye, dye, rye and eye.

    Some techniques from analytic phonics can also be used to teach exception words. These include analysis of onset-rime. Many groups of exception words contain a regular onset and a common irregular rime e.g. some & come; all, ball, call, tall, wall, stall, small etc.

    An added ingredient that is missing from reading instruction, and would help with the learning of exception words, is the colourful history of English word origins and irregularity. Context is important in the learning of new concepts in all academic subjects, but is often ignored when teaching reading and spelling. When the teacher has a good knowledge of the history of the English language, this can be used to great effect in anchoring words and their spellings into long-term memory and aiding their recall. For instance, when I teach ‘kn’ as in knife, knight, knit etc., I tell the story of how those words came from Scandinavia and were brought to England by the Vikings. I explain that the k used to be pronounced, and that words in Scandinavian languages with this spelling still pronounce the k. The children are told that in English we stopped pronouncing the k but left the spelling the same as in the olden days. I let the kids have fun pronouncing all the words with this grapheme like Vikings, and recommend using this pronunciation when trying to recall the spelling. Similarly, I tell the children that the grapheme ‘igh’ now makes the long i sound (as in island), but used to make a different, guttural kind of sound (which I demonstrate for them, disgusting them in the process). In the Middle Ages English sounded more Germanic than it does now. Children have fun pronouncing words in the old way, and this becomes a memory hook for learning and recalling words with this spelling of the long ī sound.

    The irregular words which begin with wa and swa mostly had their origins in Germanic languages where the w was pronounced as /v/. When the /v/ sound changed to /w/, the a evolved to an /o/ sound as well. Even the irregular spelling of ‘yacht’ can be explained in the form of a story. When the first English dictionary was to be printed, it was sent to Holland for printing, the closest country with a printing press at the time. On the way over, the handwritten version of the dictionary contained the spelling ‘yott’. The Dutch, who did not understand the English language very well, confused the ‘ach’ in Scottish words, e.g. Lachlan, with the /o/ in English words and inserted ‘ach’ instead of o into yacht – and that is how we got stuck with that silly spelling. We also had additional letters in the English language, including eth and thorn (voiced and unvoiced th). As these letters were included in neither the Dutch language nor the printing press, they had to be replaced by the digraph ‘th’. The stories of English irregularity are fascinating to adults and children alike and should not be underestimated as a teaching tool.
    Although spelling reform in the history of English found it difficult to achieve one-to-one correspondence, efforts were made to make it more obvious whether vowels were to be pronounced with the short sound (a as in apple, e as in egg) or the long sound (a as in alien, e as in evil). Knowledge of open and closed syllables is helpful for understanding how to arrive at the correct pronunciation. For instance, if you understand that open syllables end in a long vowel, then words and prefixes such as he, she, we, me, be, so, no, go, co-, no(tice), I, bi-, tri-, a(lien), ta(ble) and u(nicorn) are much more easily read and are therefore not exception words.
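    As a toy illustration of the open-syllable observation, one could treat a syllable ending in a single vowel letter as open and predict the long sound. This is a deliberate simplification (English has many exceptions, and real syllabification is harder), offered only to show how regular the listed examples are:

```python
VOWELS = set("aeiou")

def is_open_syllable(syll):
    """Crude heuristic: an open syllable ends in a vowel letter,
    so predict the long vowel sound; otherwise predict the short sound."""
    return syll[-1].lower() in VOWELS

# Examples from the discussion, plus two closed syllables for contrast.
for syll in ["he", "she", "we", "go", "no", "bi", "ta", "cat", "sit"]:
    sound = "long" if is_open_syllable(syll) else "short"
    print(f"{syll}: predicted {sound} vowel")
```

Under this heuristic every word and prefix in the list above is handled without being treated as an exception word, which is exactly the point being made.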

    In conclusion, the teaching of exception words can be a lot more sophisticated, efficient and interesting (and, I hypothesise, research on comparative methods of teaching them would also establish it as more effective) than the blunt instrument of teaching each irregular word on its own, out of context, as a whole, and with no reference to phonics whatsoever.

    For academics and teachers with little knowledge of the history of the English language and how to teach it effectively, I recommend the books ‘The Stories of English’ and ‘Spell it Out’ by David Crystal, the video https://www.youtube.com/watch?v=oNkiatQo-pQ by Alison Clarke of Spelfabet, and the publications and blog posts of John Bald: https://books.google.com.au/books/about/Using_Phonics_to_Teach_Reading_Spelling.html?id=RYB1P7UAGosC.

    For those teaching children with Surface Dyslexia, they might want to use (in addition to the methods of teaching exception words explained above) the multi-sensory techniques of Grace Fernald.

    Kind regards.
    Karina

    • Karina McLachlain July 12, 2016 at 7:41 am

      Thank you for your reply. I think the point you make about the self-teaching mechanisms that children use to learn exception words is very valid. Children who have high phonological awareness will often discover, and then generalise, many letter-sound correspondences of their own accord. When it comes to exception words, they are likely also to notice the less well known grapheme-phoneme correspondences, such as the ‘oul’ in would, should and could. However, not all children, particularly those with low PA or dyslexia, are able to discover letter-sound correspondences for themselves, and self-teaching is therefore far less likely to be happening with them. My concern, and that of most academics, should be how best to teach children with low PA and a reduced natural ability to develop self-teaching mechanisms.

      Academics need to reexamine the evidence that shows that sight word teaching is counterproductive and at odds with the teaching of synthetic phonics. Methods of teaching exception words need to be more compatible with those used to teach regular word reading.
      Johnston and Watson noted in the Clackmannanshire study that the children taught by synthetic phonics were better able to read exception words than those taught by analytic phonics. This is despite the SP-taught children having been taught far fewer ‘sight’ words. By contrast, the AP groups, it was theorised, should have been much better at reading exception words, because their reading programme began with learning at least 50 high frequency and/or exception words by sight before phonics started to be introduced, and high frequency and exception words continued to be taught as sight words during the programme. In spite of this, the AP groups scored significantly lower on reading exception words, as well as on reading per se. This is strong evidence that teaching ‘sight’ words does NOT help children to learn to read exception (or any other kinds of) words, but that phonics DOES. The children in the SP group learned to read exception words so well because they had more explicit teaching in grapheme-phoneme correspondences, and this helped them to better develop a self-teaching mechanism which assisted in reading exception words with less common letter-sound correspondences.

      Some very important information was included in the full Clackmannanshire report that was not in the summary. It was noted that around middle primary a group of children who had been making good progress in reading in P2 began to plateau and lose ground compared to their peers. Qualitative observation of their reading by their teachers noted that this group of children had stopped using decoding and blending as their first strategy for approaching the reading of words; their first reflex was to guess at them, incorrectly, in a ‘whole word’ manner. This guessing reflex held even when the children knew all the letter-sound correspondences in the word and would have derived the correct pronunciation from decoding and blending. This group were again taken through the initial 16-week phonics programme that they had been given at the beginning of the study, after which their reading once again started making good progress. The information that was NOT in the report was whether this group of children was made up of those initially given the SP treatment, one of the AP treatments, or a combination of both. I happened to meet one of the researchers at the Reading Reform Foundation conference in the UK before returning to Australia, and we discussed that very question. She said that the children were drawn exclusively from the AP groups, and that no children given the SP treatment initially regressed in such a manner. It seems that sight word training can (especially with children with low PA) form an ingrained and resistant-to-intervention ‘whole word’ reflex in approaching reading. This is evidence that sight word training is counterproductive and even dangerous, especially for lower skilled and dyslexic readers.

      Dyslexic readers naturally have more difficulty in decoding and thus are more likely to want to bypass their difficulty by guessing at words, since it is easier to do. If teachers teach whole words by sight, then children have effectively been shown, and given permission, to read a word automatically as if it were a ‘picture’. When they are unsure of a word, or think they know it, they guess (99% of the time incorrectly). It is my experience too that children taught to read sight words develop terrible guessing reflexes which are very resistant to remediation. The guessing phenomenon has also been confirmed in evidence from Seymour & Elder (1986), who noted that response time for reading words by children taught to read ‘sight words’ was unchanged regardless of the length of the word: ‘Vocal reaction time in reading is not affected by variations in word length’. Children taught to read sight words do not attend to the entire word and all its letters; they just guess.

      In my experience, when children with a ‘whole/sight word’ approach to reading guess incorrectly at words, their answer shares some visual features with the target but will usually vary widely from it in pronunciation, e.g. of/from. My experience is also confirmed by evidence from Seymour and Elder (1986): ‘Both taught and untaught words are sometimes confused with other words. Confusions are generally similar in length to the target and may share a letter or letter group or elements which are confusable with letters in the target, often without regard to position’. Both this study and one done by Gough, Juel and Griffith (1992) showed that teaching sight words interfered with the ability of many children to develop the alphabetic principle.

      With regard to exception words, I’m wondering how well surface dyslexics can take advantage of self-teaching methods, if at all. There is no evidence that I know of (correct me if I’m wrong) that developmental surface dyslexics learn to read exception words (or learn to read any kind of words) better in schools where whole language or ‘balanced literacy’ instruction is used in preference to synthetic phonics. This would suggest that sight word instruction is not particularly helpful.

      Whilst I have seen some journal articles which report that teaching sight words to surface dyslexics (in the absence of any other idea of how to help them) is ‘effective’ (albeit with minimal sample sizes, no control group and no alternative treatment group), how effective is the intervention really? Of course teaching sight words to surface dyslexics is going to be more effective than not teaching words at all, but are there better ways that could be explored?

      I suggest that other methods for teaching exception words are investigated so that there is a clearer picture of how best to help children to read exception words, with morphology + the context of the history of the English language being considered as a possible treatment in future studies. With regard to surface dyslexia, I suggest the almost forgotten methods of Grace Fernald (Samuel Orton took some of her techniques and incorporated them into Orton Gillingham methods) be re-examined.

    • Karina McLachlain July 12, 2016 at 1:08 pm

      Dear Professor Castles,
      I’m pretty sure that Rhona Stainthorp had nothing to do with the Clackmannanshire report. That was Rhona Johnston. Most of Rhona Stainthorp’s research has been with precocious readers with very high PA, who can more or less teach themselves to read in spite of bad pedagogy, e.g. look-say. These kinds of children are not disadvantaged by sight word instruction in the same way that children with low PA or dyslexia are.
      As I have said previously, many academics and teachers, even leaders in the field of reading, do recommend the teaching of exception words by sight. One of the reasons for this is that even they are not aware that any other ways of teaching these words exist. Perhaps if other ways of teaching exception words were brought to their attention, and these methods were studied, then other, better options might become available.
      Has it been proven by research that explicit phonics teaching increases the ability of children to be taught words by sight, or improves the ability to read exception words? These are not the same thing, as Alison Clarke has pointed out, even though much of the research confuses the type of word with the type of instruction. It most definitely has been shown that phonics instruction improves exception word reading, even for words that are not directly covered in the instruction programme. This should be making researchers think about how they can expand phonics instruction to include more of the less familiar letter-sound combinations, in an effort to improve exception word reading even more, rather than arguing that phonics should go only so far.

      • Debbie Hepplewhite July 12, 2016 at 3:26 pm

        Karina,

        You wrote:

        “This should be making researchers think about how they can expand phonics instruction to include more of the less familiar letter-sound combinations in an effort to improve exception word reading even more, rather than arguing that phonics should go only so far.”

        And I’m entirely with you on that suggestion. I have found it worrying that Jonathan Solity suggests that teaching a diminished level of alphabetic code knowledge plus sight words is the way to go, rather than teaching the complex English alphabetic code more comprehensively and giving attention to spelling improvement – not just a launch into reading acquisition.

        In fact, I suggest a thorough grounding in the alphabetic code, such that my approach is described as the ‘two-pronged systematic and incidental phonics teaching approach’, which introduces a comprehensive alphabetic code chart from the outset. This means that not only are the children taught systematically and cumulatively from the beginning of the planned teaching programme, they are, in effect, given the full rationale for the alphabetic code (the bigger picture). Teaching of any correspondence of the code is encouraged, for reading and/or spelling purposes, at any time, which also addresses differentiation well and integrates phonics teaching and application into the wider curriculum.

        Perhaps researchers, then, might take an interest in more rigorous teaching of the alphabetic code, and not less!

        • Karina McLachlain July 18, 2016 at 10:22 am

          Dear Debbie,
          I have to agree with you 100%. I did meet you briefly at a Reading Reform Conference in the UK when I was still living there, although I am sure that you don’t remember me. It was the year that Nick Gibb was speaking.
          I am not familiar with your particular programme and from your description, it does sound very good, especially the emphasis on not teaching words by sight if at all avoidable.
          I believe that many people pushing the teaching of sight words are influenced by the ‘Dual Route Cascaded Model’ of reading – or should I say they extrapolate from it and misapply it when giving prescriptions for teaching, particularly with regard to the lexical route. The dual route model is a description of how competent adults read individual words, not a prescription for how novices should be taught to read. With the sub-lexical route, it is pretty easy to identify that letter-sound correspondences are needed to read regular words and therefore should be taught to automaticity. When it comes to the lexical route, it is not as clear how regular and irregular words have come to be stored in long-term memory (the lexicon) as wholes. It is very easy for people to assume (without doing any actual research) that exception words can only be read via the lexical route, and therefore to extrapolate to another assumption: that exception words can only be taught as sight words so that they can be read via the lexical route. These assumptions need to be challenged for their lack of scientific merit. Untested theory is dictating educational practices, not evidence.
          I believe that you may be acquainted with a good friend and mentor of mine called John Bald. Although I am a teacher trained in synthetic phonics and a dyslexia specialist teacher with AMBDA, I have to credit him with inspiring me to incorporate the irregularity of English spelling into my teaching practices, and with introducing me to and demonstrating Grace Fernald’s methods. I don’t agree with every point that Mr Bald makes about teaching, and we differ on the relative importance of some points, but we both long ago noticed that most synthetic phonics programmes ignore aspects of the history of English spelling and of spelling reform attempts, and that this makes teaching irregularity difficult for those programmes. This comment is not directed towards yourself (as I haven’t examined your programme) but at SP programmes in general. Still, you may like to reflect on it, since I understand that you are one of the heads of the RRF.
          Historically, the RRF is perceived to have been highly influenced by Emeritus Professor Diane McGuinness, who is apparently against any discussion of dyslexia whatsoever, and it appears that OG (Orton-Gillingham) methods have also been largely dismissed in the fallout. This is a shame, as OG goes much further (although in my opinion not far enough) in showing many patterns of irregularity in English in a way that makes English more predictable and well understood.
          For instance, there is a tendency in many SP programmes to try to fit each English word into a tight one phoneme-one grapheme correspondence pattern, where rabbit is r-a-bb-i-t, rather than showing it as two closed CVC syllables joined together (r-a-b/b-i-t). I have heard a member of the RRF say that teaching it this way would necessarily introduce a schwa in the middle, making it rab-er-bit. This is quite an unfair criticism, since it is bad practice to teach consonants with schwa sounds attached. No professional I know does so, and no child taught to read two-syllable words with this method inserts the schwa between the syllables. Across SP instruction manuals and decodable readers, I have seen two-syllable words such as rabbit usually expressed without syllables (r-a-bb-i-t), and when syllables are shown, there is wide variation, e.g. ra/bbit, rabb/it and rab/bit. The notion of open and closed syllables was introduced into English spelling to give the reader a more predictable convention showing whether the long or short vowel is appropriate when attempting pronunciation. Thus the concept of adjacent consonants carrying information about how the vowel sound is produced was born. This type of liaison between vowels and adjacent consonants means that even phonetic English words can’t always be portrayed in a strict letter-sound correspondence pattern. Another instance where a letter carries no information, other than how to pronounce another letter within the word, is the ‘magic e’, which influences the preceding vowel and in so doing changes a closed CVC syllable with a short vowel into a CVCe syllable with a long vowel. Whilst it may be efficient to teach split graphemes (e.g. a-e, as in cake) for the long vowel sounds, I think children should be told why the ‘magic e’ was introduced and what purpose it serves.
There are five kinds of syllable division patterns that are useful for children to know in order to negotiate reading both one-syllable and multi-syllable words. Teaching open syllables, e.g. he, she, be and no, go etc., makes these words no longer tricky or exception words, and means that a word like table is read correctly with a long a and not a short a.
          One of John Bald’s messages to me when he was mentoring me in his methods was not to try to ignore the irregularity of English as if it doesn’t exist (e.g. by trying to force every word, no matter how irregular, into a one-letter-one-sound set of correspondences, with no explanation of why some patterns are more common than others). His message was to embrace the irregularity and employ it in your teaching methods. In observing John Bald instructing beginning students and those with long-term literacy difficulties, I saw him entertain and inspire children to learn to read with the stories of how English came to develop both the patterns that are considered regular and those considered irregular, helping children make links between words with similar patterns of (ir)regularity. Due to his knowledge of word origin, he can move struggling readers on enormously, much to the delight of their parents, who were previously pulling out their hair. Children remember the stories and then pass them on to their parents and other children. Many children struggling with literacy have a poor memory for words and letter-sounds, but they don’t have a poor memory for these kinds of stories, which work as an aide-memoire.
          He does not charge parents to tutor and does not charge teachers who wish to learn from him by observing his lessons. He might be a valuable person to network with or have speak to the RRF on how to counteract the new interest in and push in favour of sight word teaching.
          http://www.johnbald.typepad.com/

  27. Maggie Downie July 12, 2016 at 1:23 pm

    I think that Karina has a very valid point in that the children many of us are most concerned about are those who struggle to learn to read. These are the children who are least likely to be able to ‘self teach’ and who are most likely to be confused by a mix of teaching methods, even if the method which deviates from consistent emphasis on decoding and blending only applies to relatively few words. To the practitioner on the ground it seems most logical to keep it very simple for these children and to avoid anything (such as use of letter names) which might lead to confusion even if research evidence to date doesn’t endorse this.

    I appreciate that, so far, research doesn’t cover this sort of detail and echo others who suggest that there is need for more. We know, from research evidence, how ‘most’ children can be taught to read but we don’t really know how best to help the strugglers beyond the fact that they learn better with systematic structured phonics instruction than with any other method. However, this still leaves a number who still struggle but who may benefit from attention to small, seemingly insignificant details such as exactly how ‘sight words’ are taught.

  28. Dick Schutz July 13, 2016 at 1:31 am

    Like Anne and others, I found the Solity and Shapiro article of 2009 a promising option. The full article can be accessed at:
    http://eprints.aston.ac.uk/20605/1/Solity.pdf

    With a bit more googling I found that Jonathan had managed to actually “get the research into schools”—not an easy accomplishment, as you can see in tracing the history beginning in September 2012:
    http://mycouncil.oxford.gov.uk/mgIssueHistoryHome.aspx?IId=5323&PlanId=122
    The Oxford Challenge: Achieving World Class Teaching in Oxford City Schools serving Disadvantaged People

    The £400,000 project started off well. In September 2013, we read:
    http://www.oxfordtimes.co.uk/news/headlines/10768444.Kick_starting_children_s_enthusiasm_for_learning/
    Kick-starting children’s enthusiasm for learning

    But by February 2015, it was a different story:
    Education boss warns that Oxford children left ‘unable to read’ after schools project fails
    http://www.oxfordmail.co.uk/news/11779558.Education_boss_warns_that_Oxford_children_left____unable_to_read____after_schools_project_fails/

    All six schools participating in the initiative had dropped out by the end of the second year of the planned 3-year project.

    Jonathan’s “evaluation” of the project tells a different story:
    http://mycouncil.oxford.gov.uk/documents/s21678/KRM%20Report%20Academic%20Year%202013-2014%20Summary%20January%202015.pdf

    The bottom line is that we’re eight years older and Oxford City taxpayers are £400,000 poorer, but we’re back to square one.

    I found it curious that nowhere along the way was there any mention of the Year 1 Screening Check. A bit of looking at Oxford’s performance and at the instruction students are receiving would be a good start in “achieving world class teaching.” Jonathan does note:

    70. From the perspective of KRM, effectively seeing phonics and reading as separate processes is a major concern. Unfortunately the schools implementing KRM Reading saw the two as separate, distinct processes which will have been a barrier to the successful use of real books in teaching reading. . . all four schools now separate the teaching of phonic skills from reading.

    Many other schools, teachers, and children throughout the English-speaking world make the same separation.

    • Karina McLachlain July 13, 2016 at 2:41 am

      Shapiro’s research claiming that reading can be taught well with ‘real’ books is at odds with the research that was gathered for the Rose Report (2006). The majority of that research shows that better reading progress is achieved in the early years when phonics instruction is accompanied by decodable readers that match the letter-sound correspondences the children are learning (and don’t expect them to read words that include correspondences they haven’t yet been taught), thereby consolidating the phonemes and graphemes, giving ample practice of the skills of decoding and blending, and building fluency.

      Reading Recovery and PM levelled books are not ‘real’ books. They have been specifically designed to facilitate learning to read by memorising sight words and guessing at unknown words from pictures or context cues. This is why they are entirely unsuitable for children who have not yet mastered the code. These kinds of books are directly responsible for the kinds of counter-productive guessing habits that many poorer readers are afflicted with, even if there has been phonics teaching at their school. If Solity is promoting RR levelled readers, then the disconnection between what is seen as ‘phonics’ and ‘reading’ at the 6 schools where his scheme failed is a result of his own making, regardless of whether he chooses to accept responsibility for it or not. RR levelled books are not at all compatible with lessons in phonics or phonological awareness.

      This situation reminds me of a very successful programme in the United Kingdom called Springboard for Children, whereby volunteers (and some paid specialist dyslexia teachers) go into schools and work once a week with children who have developed literacy difficulties. The programme consisted of structured one-to-one phonics lessons (Orton-Gillingham style) accompanied by decodable readers, e.g. Dandelion Readers. Unfortunately, when I was still living there, the board (mostly made up of community representatives and non-specialists in literacy) made the mistake of appointing a new Chief Executive who was a Reading Recovery teacher. She immediately changed the programme to eliminate decodable readers and replace them with RR levelled books, along with other sweeping changes in line with her whole language philosophy on reading, thereby making the programme less effective (I saw it happen in a school I was working at in South London) and alienating the majority of very experienced tutors, a lot of whom left and took horror stories with them. I believe a new CE has been in charge since 2013, and I hope that things are improving for this very valuable programme.

  29. Laura Shapiro July 13, 2016 at 10:33 am

    It’s great to see that this important discussion continues, and to hear the perspectives of those on the front line. It’s clear that we haven’t reached a consensus yet (!) and as a researcher, I’m trying to pick up on the main empirical questions. My sense from the discussion is that there is considerable variability in practice, even within programmes with a strong focus on phonics. These details are very likely to affect the children we’re most concerned about (picking up on Maggie Downie’s point). There is clearly lots of evidence out there, but interpreting the balance of evidence is tough without systematic experimental comparisons. Some of us are keen to follow this up experimentally, and would appreciate input on what the most critical issues are in practice. Here are some of the main controversies I’ve picked up from the discussion (please add to these & also highlight which are most critical!):
    – how to teach high frequency words (a relative emphasis on decoding, or on “whole-word” recognition?)
    – the number of GPCs taught (the narrow range supported by computational modelling work – see Janet Vousden’s post – or maximising the transparency of the code for reading and spelling, as recommended by Debbie Hepplewhite)
    – reading materials used (decodable texts? Or real books?)
    – whether a mixture of methods is confusing, or beneficial, for poor readers
    – how to support spelling as well as reading development (how does this affect our decisions on high-frequency word method and number of GPCs?)
    And just to pick up on a few further points:
    I agree that Stuart & Stainthorp’s book provides a really nice synthesis of the body of work we’re discussing, and is written with practitioners in mind:
    https://uk.sagepub.com/en-gb/eur/reading-development-and-teaching/book237999
    And please note that the “real books” work wasn’t done by me, I think this is the main paper- by Solity & Vousden (2009):
    http://www.tandfonline.com/doi/abs/10.1080/01443410903103657?journalCode=cedp20

  30. Kate Nation July 13, 2016 at 11:17 am

    An excellent synthesis of outstanding questions Laura. I’m pleased that Anne’s post has stimulated an active discussion and allowed points to be aired that we can all learn from. I would like research to begin to identify more clearly what children actually learn from different teaching approaches, and how this generalises to other words. For example, a ‘sight-word’ approach for a particular word might be taught with one thing in the teacher’s mind, but what kids actually do and learn, and how this extends (generalises) to other words might reflect different processes in the child’s mind.

    I also agree that Stuart & Stainthorp’s book provides a nice synthesis. I reviewed it in an earlier post here:

    http://readoxford.org/book-review-reading-development-and-teaching

    This did result in some negative comments on twitter. For sure, there are things we might disagree with in parts of the book – that’s inevitable. But overall, it provides a clear and informed overview. I hope it makes its way onto reading lists for trainee teachers.

  31. Karina McLachlain July 14, 2016 at 2:22 pm

    I apologise; the above passage was written on an iPad on a bumpy bus travelling across rural NSW. The tone came across as brusque and there are many proofreading errors which make the passage difficult to understand.

    I have some suggestions for Dr Shapiro. I think that it is premature to be asking questions like ‘What is the best way of teaching sight words?’ and ‘How many sight words should be taught?’ These questions assume that exception words can only be taught as sight words and/or that this method has been established as the best way of teaching them. These assumptions are untested hypotheses. Although there are many arguments that exception words need to be taught as sight words and/or that it is beneficial to teach a bank of high frequency words (even if they are not exception words), where is the evidence to back these hypotheses?

    In order to settle these questions, a study would need to be set up which compared children learning to read exception/HF words by sight with children taught by other methods, such as grouping exception words with similar patterns of irregularity based upon word origin and teaching the less regular phoneme-grapheme correspondences. When particular words are one-offs, e.g. women (I don’t know of any other words off the top of my head that use the letter o to represent /i/, but correct me if I’m wrong), there are two approaches to teaching that word that could be compared: 1) teach it as a sight word; 2) point out the letter o and teach the children that it makes an /i/ sound in this word (and that all the other letters in the word are regular), as Johnston and Watson recommend.

    I’m going to attempt to put the case that teaching every single exception word on its own as a sight word (when there are literally thousands of them) is inefficient, whereas grouping them and providing students with the context of word origin means that more exception words can be tackled in less time. Exception words are usually better remembered (make it into the lexicon) when taught in story context and in association with their word families.

    Let’s examine three types of irregularity that derived from French. When introducing words of French origin, students can be told some of the exciting events about William the Conqueror and the Norman French invading in 1066, how they brought a lot of new vocabulary and also changed the pronunciation of English to sound less Germanic and guttural, etc.

    Take for instance words with the grapheme ‘ou’ that makes the /uː/ sound, as in ‘group’. These words are principally derived from French and include: group, soup, routine, crouton, coup (d’etat), toupee, troupe, route, croupier, louver, acoustics, bouffant, boulevard, boutique, camouflage, caribou, carousel, cougar, coupon, courier, couscous, uncouth, tour, detour, entourage, douche, mousse, moustache. I call this list the ‘French ou (/uː/) list’. Although probably not from French origin (and this I advise the children of), I also include the exception words ‘you, youth, wound (injury) and ghoul’ in the list, since they share the same spelling pattern. To further flesh out the story of French word origin, I sometimes give examples of other words with the same spelling that came from French, but where the pronunciation has been anglicised, such as courage, cousin, double, mountain, couple etc. and I pronounce those words in French to compare with the current English pronunciation (or have the children predict the pronunciation in French from the written form – this can be used to check that the children have learnt the convention and can apply it). In association with this list, I also teach the exception words ‘move’, ‘movement’ and ‘movie’ (moving picture). These words were derived from the French ‘mouvement’. Somehow the u got dropped, but we retained the pronunciation of the French /uː/. This information has helped some of my students, who until being told that story, persisted in spelling the words ‘moove’ and ‘moovie’.

    Another group of exception words includes those ending in ‘tine’ pronounced ‘teen’. These words also originate from French and include routine, quarantine, philistine, guillotine, libertine, gelatine, pristine, nicotine, creatine etc. Even though machine doesn’t have a ‘t’, I also point out that the ending is similar. To flesh out the story, I often point out words that came from French but whose pronunciation has been anglicised, such as Valentine, clandestine etc., and pronounce them in the way that they are still pronounced in French (or ask the children to predict how they would be pronounced in French from their written form). In Australia, teachers can even discuss modern Australian products such as Ovaltine and Tontine, which follow the same convention on spelling and pronunciation as the rest of the list.

    A third irregularity in English derived from French is the ‘ch’ being pronounced as ‘sh’. These words include: machine, chassis, brochure, champagne, chandelier, chaperone, charlatan, chauffeur, chauvinist, chef, chiffon, chivalry, cliche, crochet, machete, moustache, parachute, quiche, ricochet, chalet, pistachio, niche and of course the names Michel/le, Cheryl, Charlotte, Chicago, Michigan etc.

    As you can see, there are altogether about 66 exception words above, but they have been organised into three groups under one larger banner of ‘words of French origin’. Even if only 10 words on each list are taught, the children have been shown three different irregularities that they can generalise to other exception words of French origin sharing the same spelling pattern.

    Many of the words appearing on the list of 66 have only one irregular grapheme-phoneme correspondence, e.g. chef, nicotine, detour etc. Some words appear on more than one list, e.g. routine, machine, moustache. The more irregularity you teach, and the more students’ knowledge accumulates, the more competent they become to tackle any word thrown at them.

    Another question for discussion: at some point in the teaching-learning cycle, do exception words stop being exception words? The definition of an exception word is one that ‘can’t be sounded out’. Is the word ‘chef’ only an exception word until the French ch/sh is taught? After it is taught and the child can confidently decode the word, does it then become a regular word? Is the word ‘routine’ an exception word if the student is not aware of either the French /uː/ or the French way of pronouncing ‘tine’ like ‘teen’? Once both have been taught, and the student can decode the word, is it an exception word to this student any longer?
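    [Editor’s note: the question above can be framed operationally – a word is only an “exception word” relative to the set of correspondences a particular learner has been taught. A minimal illustrative sketch, with entirely hypothetical word segmentations and GPC sets (not drawn from any real programme):]

```python
# Illustrative only: "decodability" is relative to the learner's taught
# grapheme-phoneme correspondences (GPCs), so the same word can move
# from "exception" to "regular" as teaching expands.

def is_decodable(segmentation, taught_gpcs):
    """segmentation: list of (grapheme, phoneme) pairs for one word."""
    return all(pair in taught_gpcs for pair in segmentation)

# Hypothetical segmentation of 'chef' using the French ch -> /sh/.
chef = [("ch", "sh"), ("e", "e"), ("f", "f")]

basic = {("c", "k"), ("h", "h"), ("e", "e"), ("f", "f"), ("ch", "ch")}
extended = basic | {("ch", "sh")}  # after the French 'ch' is taught

print(is_decodable(chef, basic))     # False: still an exception word
print(is_decodable(chef, extended))  # True: no longer an exception word
```

    [On this framing, “exception word” names a gap in the learner’s current code knowledge, not a fixed property of the word.]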

    Teaching the irregularity of English and making connections means that children are more likely to be able to make sense out of a language that, without this historical information, doesn’t initially appear to make any sense. Lacking the ability to be able to make any sense of English has been a big cause of anxiety for many of the dyslexic children that I have worked with. They appreciate being given logical reasons for how the language came to be the way it is and to have their questions answered with more than ‘That’s just the way it is’.

    Karina

  32. Dick Schutz July 14, 2016 at 8:14 pm

    The thing is, English is NOT an “irregular” language and the English Alphabetic Code contains no “irregular” or “exception” Correspondences. All sighted children sense text by “sight,” and Dehaene and colleagues have found that even blind individuals who sense text by “feel” use the same brain network patterns that sighted individuals use. The terms, “sight words, exception words, and tricky words” are all misnomers, and although as Laura has demonstrated, “researchable questions” can be generated involving the terms, the inquiry is inherently doomed to yield nothing more useful than “a grant to support the research” and one or more “publishable articles in peer-reviewed journals.” Grants and articles are coins of the realm in academia, and there is nothing wrong with that, but they don’t buy anything much other than “more research.”

    Anne, Kate, and Laura have generously and courteously “reached out,” but “practitioners” are in the same position as students are. If they knew what the “critical issues” are, they would have resolved them. Providing Laura “the input she asks for” really “isn’t in the cards.”

    Each of the questions Laura raises is complex when you move beyond the question as it is posed. It would be feasible to go through the questions one by one to untangle the complexity. However, it’s also feasible to short-circuit that interchange and cut directly to the chase. The best “breadboard” for resolving the questions Laura raises and other matters “critical” to reading instruction is the database of Alphabetic Code [Phonics] Screening Check results that has been collected in England since 2012. “Researchers” have ignored the database–because it’s not an RCT; and “practitioners” have largely opposed the data collection–because it “doesn’t tell teachers anything that they don’t already know.”

    From a research perspective, the database provides the foundation for a natural experiment. It’s attractive because it allows the researcher to “get into schools” without the hassle Jonathan experienced in Oxford, and the similar scars others of us have acquired. It’s also attractive because the inquiry results are directly implementable by “practitioners”–eliminating the “research” and “practice” gulf.

  33. Laura Shapiro July 14, 2016 at 8:44 pm

    Many thanks for your suggestions, Karina. I agree that defining exception words is difficult, especially when there is variation in the GPCs taught, and in the order in which they are taught (e.g., in the Letters and Sounds programme, words initially classed as “tricky” become decodable as children are taught more mappings). In practice, it’s clearer to contrast words that are fully decodable, given a child’s existing phonic-knowledge vs. words that are only partly decodable.
    However, I don’t recognise the questions “how many sight words…” or “how should sight words be taught…” – these aren’t from Anne’s original post? The question posed is whether we should teach some words “by sight” in addition to phonics. Actually, the study you suggest – a systematic comparison of different ways of teaching high frequency words – would be a good way to tackle these questions.
    I’d like to find out more about the main alternative methods for teaching these words (in addition to the comparison we already made between Letters and Sounds vs. ERR in Shapiro & Solity, 2015). It’s very interesting to hear that you teach your dyslexic students about word origins & great that they find this inspiring.
    You implied that Anne, or others commenting on here, were proposing to teach all exception words by sight, when in fact Anne specifically discusses the issue of generalisability in her original blog post. Jonathan Solity’s programme teaches 100 high frequency words by sight, for example.
    Dick – I’ve just seen your follow-up comment. You’re very disparaging about research in general! I think some of your concerns are addressed by us all using more precise terms (see above), but I think the general point is that you feel that we (as researchers) aren’t getting close enough to practice? I think finding out about the range of approaches used (as this thread is helping with) does get us closer to some testable questions. For example, my sense in the UK is that there are many different phonics programmes out there for schools to choose from, all following “systematic synthetic phonics” but varying in details (such as the questions I mention above) and cost (!), yet we’re short of systematic comparisons. I agree existing data is a useful way of investigating this (and I very much doubt researchers are ignoring this database), but RCTs would provide the strongest evidence of the effectiveness of different programmes, and that is the standard we should be aiming for in Education.

    • Karina McLachlain July 15, 2016 at 3:09 am

      Hi Laura, my comments weren’t just about Anne in particular but the whole debate about ‘sight word’ teaching. However, these specific questions were asked by Anne in her blog:

      Which methods of sight word teaching – writing, copying, repeated pronunciation, or sequential letter naming – are most effective?
      What is the optimal number of sight words to teach at different points in reading acquisition, and with what intensity?

      The assumption that sight word teaching is necessary underlies these questions. I believe these questions are premature when there is no evidence that exception or high frequency words need to be taught by sight, or that this method is any more beneficial than alternatives (as opposed to merely better than not teaching the words at all). In my experience, sight word teaching, especially with poorer readers, discourages them from decoding and encourages guessing. I believe that the more words are taught by sight, the more counterproductive their effect on reading progress.
      Karina

  34. Dick Schutz July 14, 2016 at 10:58 pm

    Ooops, I regret that I came across as “disparaging about research in general.” That was just the reverse of my intention. (I happen to be a long-time, card-carrying researcher myself and the founding editor of the US journal “Educational Researcher”, but that’s beside the point.)

    The thing is, despite this helpful colloquy, (which has gone on a lot longer than most “comments to a blog”) we’re still NOT all using precise terms, and even if those participating in the comments were to come to an agreement on terminology usage, the rest of EdLand would still be talking Educationese.

    Much of the dissonance can be resolved without any empirical inquiry necessary. But the necessary analysis requires delving into gory detail, which can be done but seldom does get done. For example, we blithely say that some words are “partially decodable.” This is akin to saying that some women are “partially pregnant.” All words are decodable, if you know how to decode the word. And a word is no more partially decodable than it is “partially pronounceable.”

    True, we can instruct teachers to teach children to pronounce the correspondences in a word that they have already learned how to handle. This requires children to
    –distinguish two classes of words– “garden variety words” and “exception/irregular/tricky” words
    –distinguish the elements in the word that they’ve been taught how to handle as paired associates
    –memorize the remaining correspondences

    True, one can run an RCT comparing this protocol with other protocols. But getting the findings embedded into instruction is the “gulf” I spoke of.

    When you tote up the number of RCT experiments and the time it will take to run them to clarify the “approaches” that are “out there,” aiming for this seems to me a holy grail quest. The natural experiment in progress provides a sufficiently large base to draw random samples ad infinitum to test the reliability of any “systematic comparisons” that are generated in the inquiry.

    Incidentally, if there are researchers who are exploring the PSC database, other than the select few who were following remits of the DfE, I don’t know of them.

    Also incidentally, I don’t know of any programmes that have been revised on the basis of the data–or on the basis of any other data, for that matter.

    “More research is needed.” I certainly adhere to that scientific credo. But “smarter research” than we’ve grown accustomed to doing is also feasible and warrants consideration.

  35. Jonathan Solity July 20, 2016 at 4:33 am

    Our Oxford study, mentioned by Dick Shutz, highlights the tensions that exist when positive research outcomes and evidence clash with a narrow political agenda! The document posted by Dick Shutz is a summary version of a 90-page, 30,000-word report that we submitted to the City, providing a detailed analysis of the origins, outcomes and implications of the research conducted in Oxford City. In complete contrast to the newspaper reports in the Oxford Mail, the Optima Programmes (formerly KRM Programmes) in reading and maths had a dramatic and significant impact on students’ attainments. Paragraphs 11-41 of the summary report summarise the key outcomes in reading at Key Stages 1 and 2, with perhaps the most significant being that lower attaining pupils in the experimental group made progress that was better than or equivalent to that of the higher achievers in the comparison groups. The lower attaining pupils, who were taught to read through an optimal number of synthetic phonic skills and real books, on a whole class basis, made far greater progress than middle or lower achieving pupils in the comparison groups who were taught synthetic phonics through decodable texts on a small group and individual basis.

    The results we achieved, in one of the lowest attaining catchment areas in the country, were against a background of considerable turbulence in the schools. For example, (i) there were 21 changes in headteacher during the two years the project ran as well as considerable teacher mobility. Thus, within a very short time, the headteachers that opted into the project were no longer in post; (ii) four schools became part of academy chains; (iii) the Optima Reading programme is totally inclusive with all pupils being taught on a differentiated, whole class basis by their regular class teachers. Many headteachers preferred the more traditional system of withdrawing lower achieving pupils so they could be taught by TAs in small groups or on a one to one basis and (iv) the nature of the project required the schools to maintain programme fidelity which meant that not only were there no withdrawal groups but that numerous teaching methods favoured by teachers, such as guided reading and teaching multiple phonemes for graphemes, were not compatible with the methodologies used within Optima Reading.

    So why did the Oxford Mail report the views of Melinda Tilley without qualification or reference to the report we submitted to Oxford City Council, when the evidence presented was so positive and could potentially have had a significant impact on the attainments of many low attaining pupils, as well as saving the council a considerable amount of money? The answer lies in reading between the lines. Melinda Tilley was responsible for launching an alternative programme within Oxfordshire (which includes schools in Oxford City) which, coincidentally, was backed by the Oxford Mail. The results in Oxford City were inconvenient and didn’t reflect the outcomes that Melinda Tilley had hoped for. However, she didn’t let the evidence get in the way of a good story, so her ill-informed and inaccurate comments were reported in full. This included the unbelievable line, “The reason it [the scheme] failed was because it wasn’t purely reading, it was based on lots of psychometric testing which causes kids to lose interest.” There was of course no psychometric testing, only regular formative assessments of pupils’ progress as well as pre- and post-intervention measures of pupils’ reading. As far as the costs were concerned, a considerable chunk of the funding we received was spent on providing 1400 real books (200 per year group for pupils in Years R-6) for all the schools originally participating in the project so that pupils could, in theory, read a different book every day during their time in primary school. The funding was also spent on (i) consultants who visited the schools every four weeks to ensure programme fidelity, (ii) pre- and post-intervention assessments of approximately 1000 pupils and (iii) data entry and analysis. It should also be noted that Paragraphs 42 to 44 of the summary report indicate that teachers were overwhelmingly positive about the Optima Reading programme.
    The Oxford project raises a number of key issues for researchers and teachers, including how best to proceed when research outcomes clash with, or contradict, a political agenda, conventional wisdom or teachers’ preferred teaching methodologies. It’s unlikely that Melinda Tilley would have been so dismissive of medical advice about what research shows to be the most effective way of treating any given illness, in favour of her preferred remedies. Ben Goldacre wrote a paper for the DfE which argued for an increased role for randomised controlled trials in the field of education (‘Building Evidence into Education’) (https://www.gov.uk/government/news/building-evidence-into-education). Perhaps if the educational community (including government ministers) had been more receptive to evidence informing classroom practice, some of the research questions raised by Laura would have been investigated and resolved many years ago.

    In this context it is worth recalling that the preliminary report of the Literacy Task Force, chaired by Michael Barber in 1997 (http://www.leeds.ac.uk/educol/documents/000000153.htm), noted:
    There have been few more vigorous educational controversies in the last decade than the one over how reading should be taught. Opposing sides in a vigorous national debate took to the barricades with banners proclaiming their loyalty to “phonics” or “real books”. But while this debate has raged, research and the understanding of “best practice” have moved on. We now know a great deal about the best technologies for the teaching of reading and that they include a recognition of the critical importance of phonics in the early years. The chief strategic task is to ensure that primary teachers and schools are well-informed about best practice and have the skills to act upon it (Paragraph 43).

    The report then recommended that the National Literacy Project, the pilot version of the National Literacy Strategy, be evaluated experimentally:

    The Literacy Strategy Group should commission an independent evaluation of the NLP as soon as possible after an election. It should involve comparison of the participating schools with a control group. It should be undertaken with the specific goal of showing how the model could be refined and built upon to form the basis of a national approach to the teaching of literacy in primary schools.

    The Bullock Report of 1975 noted the very same tensions in the teaching of reading, just as the Rose Review did almost 30 years later. The Rose Review talked about the importance of teaching synthetic phonics systematically and consistently but did not enter the commercial arena by recommending any particular published programme. This was left to the coalition Government of 2010-2015, who approved various programmes which, with one exception, have never been researched experimentally. Although the Rose Review did recommend that pupils be taught synthetic phonics, it also noted in paragraph 84 the potential of real books in teaching reading:

    “There is no doubt, too, that the simple text in some recognised favourite children’s books can fulfil much the same function as that of decodable books. Thus it may be possible to use these texts in parallel, or in place of them. In any event the use of decodable books should certainly not deny children access to favourite books and stories at any stage and particularly at the point when they need to read avidly to hone their skills, as the focus shifts from learning to read to reading to learn. Current work being undertaken at Warwick University valuably explores these matters, suggesting, for example, that:

    ‘many books written for young children have a high degree of repetition anyway, above and beyond high frequency words. Furthermore, the vast choice of available books will potentially contribute to them developing and extending their vocabularies and general knowledge.'”

    The present Government has determined that children are taught to read through methods that are largely rhetoric-based rather than evidence-based. There is interesting research currently underway that potentially questions much current practice, some of which has been discussed on this website. For example, research suggests that too much phonics is just as bad as too little and that, beyond teaching an optimal number of synthetic phonics skills, it is children’s language and vocabulary knowledge that facilitates their decoding skills rather than the teaching of an ever-increasing number of GPCs. It would also be interesting to have a clearer understanding of the impact on children’s reading and comprehension skills (particularly for pupils with limited language skills) of intensive training in non-word reading in preparation for the Year 1 Phonics Screening Check. Equally, it would be valuable to research the relative impact of teaching synthetic phonics through decodable texts and through real books, particularly for students with limited language skills on school entry.

    In the absence of the relevant research, perhaps teachers and researchers should conclude that when children fail to learn as intended, the problem lies with the instructional programme rather than the children. At the very least this would ensure that the instructional methods through which pupils are taught then become the focus of future research, in an attempt to identify what Margaret Snowling and Charles Hulme described in their 2011 paper (Evidence-Based Interventions for Reading and Language Difficulties: Creating a Virtuous Circle) as ‘well-founded interventions.’

  36. Karina McLachlain July 20, 2016 at 1:17 pm

    Dear Mr Solity,

    I am sorry if your programme was unjustly maligned in the media. However, SATs results (which are publicly available) in schools across Oxford could be a source of evidence to settle which of the programmes run in Oxford have brought the best bang for the council’s buck.

    I do have a few questions and/or challenges about points you have raised in your post above.

    Firstly, you state that research suggests ‘too much phonics is as bad as too little’. You haven’t quoted any references to back up this claim. If such research evidence does exist, I would suggest that any results of this nature would have been based upon either a) a poor phonics programme that was not sufficiently structured and cumulative in nature; b) teachers who were not well trained; or c) results that only applied to the higher achieving students with high phonological awareness and a strong self-teaching mechanism that allowed them to deduce GPCs of their own accord.

    Secondly, you mentioned that ‘teaching multiple phonemes for graphemes’ is contrary to the fidelity of your programme. Considering that there are many graphemes in English that represent more than one phoneme (for example hard and soft g & c, ch, the graphemes a, e, i, o, u (which can represent both long & short vowels), ea, oo, y (which represents a consonant sound as well as several different vowel sounds), ow etc.), how are children supposed to accurately read words containing phonemes that haven’t been taught? A good phonics programme would not of course attempt to teach all spellings of a phoneme, nor all phonemes represented by a grapheme, at the same time. A good phonics programme begins with easier & more common graphemes/phonemes and progresses onto less common and more difficult ones. Without teaching all major GPCs (if I am interpreting your above comment correctly), the Optima programme sounds rather incomplete.

    Thirdly, by ‘real’ books, are you referring to books which have been levelled according to Reading Recovery criteria e.g. PM readers, Oxford Reading Tree (Chip and Biff) or any of the other reading schemes that were common during the Searchlights era? If so, then you, like many members of the public, have been influenced by decades of Reading Recovery (and other whole language proponents) propaganda that claim that their books are real and constructed ‘naturally’.

    I will repeat a short section from a letter that I wrote to the LDA Bulletin that referred to so-called ‘real’ books:
    …………………………………………………………………………………………………………………………………………………
    RR’s hegemony spreads far and wide and they even have control over children’s book publishers. Look at how all the children’s books for teaching reading are labelled with RR book levels and all reading assessment in schools is done with PM Benchmarking [this may no longer be the case in the UK, but is still current practice in Australia], also based on RR book levels. RR criteria for levelling books are vague (Hiebert 2002) and consistent with whole word instructional methods. A better system of book levelling has to be developed that includes factors such as decodability, the ratio of monosyllabic to polysyllabic words and the proportion of literal versus inferential comprehension required to read a book, i.e., factors that can be defined and measured.
    Early RR/PM readers repeat words over and over so that children will learn them by sight instead of sounding them out. Authors are instructed by publishers to write books in this way. When I did a creative writing course one summer for would-be authors of children’s books, I was given an acceptable word list corresponding to [each of the] RR reading levels and told to construct text with these words. After doing this course, I read an article about Dr Seuss, who complained that he had to write books with these kinds of word-list restrictions set by publishers, rather than phonically (Arizona Magazine, June 1981). It makes me wonder to what extent ‘high frequency’ words are an artificial construction by the publishing industry.
    …………………………………………………………………………………………………………………………………………………….
    The point being that these types of books are not ‘real’ books and they work at cross purposes to teaching reading with synthetic phonics. You may want to look into the criteria behind RR levelling and evaluate whether it is really compatible with teaching beginner readers.
    Hiebert, E. (2002). Standards, Assessments, and Text Difficulty. In A. E. Farstrup, and S. Samuels (Eds.), What Research Has to Say About Reading Instruction (Third Edition) (pp.337-369). Newark, DE: International Reading Association.
    In these types of repetitive texts, it is difficult to argue or prove that children are being exposed to a wider variety of vocabulary than in phonics readers. Even if they are exposed to more and newer words, will they be able to actually read them if they lack sufficient GPCs to decode them?

    Fourthly, the Rose Report presents much research that supports the superior outcomes for SP reading instruction with decodeable texts over so-called ‘real’ or RR-levelled books. These findings contradict the results from your single study. Whilst you quote a passage from the Rose Report suggesting that real books can be used to supplement decodeable readers, that passage is specifically referring to more experienced and competent readers who are moving to the ‘reading to learn’ phase of their reading development, not beginner readers. Evidence from across the pond also strongly supports the value of decodeable readers in early reading instruction. In a seminar on explicit instruction by Anita Archer in Australia last year, she reported on the results of extensive research carried out in the USA showing that levelled books had relatively little benefit in facilitating reading progress until around the 3rd grade (Year 3), when children were making the transition from ‘learning to read’ to ‘reading to learn’. I am happy to provide the notes and references of this seminar if you would like them.

    The Rose Report certainly does not ignore the issue of vocabulary development and suggests that children should be exposed to a wide variety of stories and vocabulary. Ways to do this include oral story telling (a long tradition) and reading stories to children. Although young children may lack the GPCs to read books like Harry Potter (or whatever the latest rage is), it certainly doesn’t mean that teachers and parents can’t read and discuss these kinds of stories with children, simultaneously developing their vocabulary and oral comprehension skills. Children also have the freedom to choose their own library books, but that doesn’t necessarily mean their choices are appropriate for reading lessons.

    Finally, this statement doesn’t make any sense and defies logic: ‘it is children’s language and vocabulary knowledge that facilitates their decoding skills rather than teaching an ever-increasing number of GPCs’. So what is decoding exactly? It is perceiving all the graphemes in a word, matching the appropriate phonemes to them and blending them together to achieve the correct pronunciation. Thus, decoding a word is entirely dependent upon and DIRECTLY related to the GPCs within it. It is not necessary to have a wide vocabulary or to know the meaning of a word to accurately decode it in most cases. I can accurately decode a passage in Italian without understanding one thing that I am reading. In fact, I have often had to learn to sing Italian arias with authentic pronunciation, whilst understanding little about the meaning of the song. Breadth of vocabulary can only ever be an indirect factor in facilitating the development of decoding skill, assisting with such matters as distinguishing between heteronyms and syllable stress.

    On the other hand, the teaching of an ever-increasing number of GPCs – and thus facilitating the ability to decode and read an ever-enlarging pool of words – can drive the development of vocabulary and language. Although choirs and singing teachers have never required me to learn the meanings of the Italian lyrics that I have sung, I have taken the initiative to find out of my own accord what certain words and phrases mean. I would suggest that anyone citing research evidence that vocabulary and language are a more important factor than GPCs in facilitating the ability to decode should re-examine the possibility that assumptions have been made about the direction of the arrow of causality.

    Ruth Miskin developed her SP programme ReadWrite Inc and its decodeable readers in a Tower Hamlets school with apparently 99% of children from a non-English speaking background, with little or no English upon school entry. Previous teaching methods relying on the use of levelled readers/ ‘real’ books were not helping her students either to learn to read or to develop their English vocabulary and language skills. However, teaching GPCs and reading phonics books increased achievement in both reading and language. In an interview that I saw with her on television, she gave examples of how children decoded words that they didn’t know and then asked the meaning of the word. This is one example of how GPCs drive vocabulary acquisition. Having the word presented in the context of a story may also facilitate learning its meaning.

    Kind regards,
    Karina

  37. Julia Warren July 20, 2016 at 5:40 pm

    Thank you for your invitation, Laura Shapiro (and for prompting this excellent discussion, Professor Castles!)

    My research-prompting question is this:

    Does learning to spell words (accurately and automatically) increase a child’s sight-word vocabulary?

  38. Julia Warren July 20, 2016 at 6:07 pm

    Oops! Please can I rephrase my question to:

    ‘How effective is learning to spell words (accurately and automatically) in increasing sight-word vocabulary?’

    (I already know the answer to the first question. Apologies for my tired brain, it needs a holiday…)

    • Julia Warren July 27, 2016 at 12:04 pm

      Thank you, Anne.
      I hadn’t come across Shahar-Yames and Share’s research before – and I would be very interested to read a copy of the full paper.
      My email address is: jlw.literacy@yahoo.co.uk

Comments are closed.