Paul Gardner

One provocative question: what on earth does ‘evidence-based’ really mean?

This post was written before Alan Tudge took leave from his position as the Minister for Education. But he’s not the only one to bang on about ‘evidence’ without really being clear what he means.

There can be little argument in wanting university Schools of Education to impart to their students knowledge premised on systematically acquired evidence. It is irrefutable that teacher educators want their students to leave university and enter the classroom confident in the delivery of best practices. However, the requirement for ‘evidence-based practice’ is in danger of becoming a political polemic in which knowledge may be obfuscated by ideology, rather than being the outcome of systematic investigation.

Writing in The Australian, Paul Kelly ‘reflects’ on the then Federal Education Minister Alan Tudge’s ‘drive’ to ensure universities impart ‘…evidence-based practices such as phonics and explicit teaching instruction methodologies.’ The former Minister warned that he would use ‘the full leverage’ of the Federal Government’s $760m budget to insist ‘…evidence-based practices are taught…’ in universities. Yet the threat rests more on the assumption that evidence-based practices are not being taught in our universities than on any substantial evidence that this is the case.

It is ironic that the former Minister should argue for something on the basis of a lack of evidence. Aside from this point, questions arise around the nature of the evidence the former Minister considers to be bona fide in relation to practice. This is an issue around which there is a distinct lack of clarity. The former Minister clearly states what he does not want, which includes sociology and socio-cultural theory. His wish to see the demise of whole branches of thinking is questionable, given that it is usually dictatorial regimes that close down thought in their nation’s academies. He wants a tightly prescriptive curriculum for teacher education. In this respect, he appears to be following the Conservative administration of Boris Johnson in Britain, where a similar proposal has been tabled for English universities, resulting in some of the world’s top universities describing the plan as deeply flawed and likely to have damaging consequences. If Boris Johnson wants something and Oxford and Cambridge consider it foolhardy, the weight of opinion surely tilts in favour of the academies.

The point remains as to the kind of ‘evidence’ upon which evidence-based practice is premised. What may pass as ‘evidence’ is not necessarily bona fide ‘knowledge’. All research, including educational research, involves knowledge that is acquired by means of rigorous, systematic investigation within clearly defined parameters. Even so, the outcomes of an investigation may be influenced by a number of factors, including ontological perspective; the framing of the research questions; methodological approaches; analytical methods; researcher interpretation; and the degree to which any funding body remains impartial. Ultimately, before it can take its place in the pantheon of evidence, research must be interrogated by means of independent peer review and subsequently published in a highly respected, discipline-relevant journal. Even then, what may appear to be good evidence can sometimes prove disastrous in its outcomes. We do not know if the ‘evidence’ to which the former Minister refers satisfies these requirements. What is certain is that the ‘evidence’ used by Paul Kelly to suggest universities are ‘failing’ their students and the nation’s schools does not meet most of these standards of respected research.

It was an Australian doctor, William McBride, who in 1961 published a letter in The Lancet suggesting a link between thalidomide and birth defects, drawing attention to the possible fallacy of evidence. Randomised controlled trials (RCTs) of the drug in rats had shown it to be effective in controlling morning sickness, but it took the observation of multiple clinical cases to prove the drug was not fit for purpose.

So, what kind of ‘evidence’ is being referred to by the former Minister when he rightly insists we need to ensure that pedagogy is ‘evidence-based’? Is he referring to evidence derived from primary research, such as randomised controlled trials (RCTs) and observational studies, or secondary research, including systematic reviews of the research literature? The fact is there is no single type of evidence. It is generally recognised that different evidence types have different methodological strengths. At the pinnacle of the ‘hierarchy of evidence’ are systematic reviews, followed by RCTs, cohort studies, case-control studies, case reports and, finally, expert opinion. Without identifying the type of evidence to which he refers, the former Minister appears to resort to lay opinion disguised as evidence.

Without clarity of thought, political policy based on vague supposition could lead to prescriptive measures that result in ‘damaging consequences’. As the thalidomide example cited above demonstrates, a single type of evidence is not always sufficient proof, and multiple types of evidence may be necessary to triangulate knowledge. Rather than denouncing certain disciplines of thought and prescribing others, perhaps the way forward is to systematically interrogate different types of evidence in order to evaluate their efficacy as bona fide knowledge. The best way to do this is by means of teacher-academics and teacher-practitioners working collaboratively, across multiple settings, engaging in systematic research and cross-referencing results. For this to happen, there needs to be a commitment by government to fund, not cut, educational research. Australia has some of the finest Schools of Education in the world; they are staffed by dedicated academics who want the best for their students and the best for the nation’s school children. What universities need is a knowledge-rich government, not political polemic that does not even reach the baseline of the ‘hierarchy of evidence’.


Paul Gardner is a Senior Lecturer in the School of Education at Curtin University. He is also the United Kingdom Literacy Association’s Country Ambassador to Australia. Paul specialises in research around writer identity, compositional processes and pedagogy. He has written about process drama, EAL/D, multicultural education and collaborative learning. His work is premised upon inclusion and social justice. Twitter: @Paugardner

The flawed thinking behind a mandatory phonics screening test

The New South Wales Government recently announced it intends to “trial an optional phonics screening test” for Year One students. This seems to be following a similar pattern to South Australia, where the test, developed in the UK, was first trialled in 2017 and is now imposed on all public schools in the state.

The idea of a mandated universal phonics screening test for public schools is opposed by the NSW Teachers Federation, but is strongly advocated by neo-liberal ‘think tanks’, ‘edu-business’ leaders, speech specialists and cognitive psychologists. The controversy surrounding the test began in England, where it has been used since 2012. As in England, advocates of the test in Australia argue it is necessary as an early diagnostic of students’ reading.

No teacher would dispute the importance of identifying students in need of early reading intervention, nor would they dispute the key role that phonics plays in decoding words. However, I strongly believe the efficacy of the test deserves to be scrutinised before it is rolled out across our most populous state, and possibly all Australian public schools.

Two questions deserve to be asked about the test’s educational value. Firstly, is it worthwhile as a universal means of assessing students’ ability in reading, especially as it will be costly to implement? Secondly, does it make sense to assess students’ competence in reading by diagnosing their use of a single decoding strategy?

Perhaps these questions can be answered by interrogating the background to the test in England and by evaluating the extent to which it has been successful.       

What is in the test?

The test, which involves two stages, consists of 40 discrete words that the student reads to their teacher. They do so by first identifying the individual letter-sound (grapho-phonic) correspondences, which they then blend (synthesise) in order to read the whole word. So, in fact, what is specifically being tested is a synthetic phonics approach to reading, not a phonic approach per se. It could even be argued that calling the test a ‘phonics’ check is a misnomer, since analytic phonics is not included.

Students pass the test by correctly synthesising the letter blends in 32 of the 40 words, a pass mark of 80%. In order to preserve fidelity to the strategy and to ensure students do not rely on word recognition skills, the test includes 20 pseudo words. In the version used in England, the first 12 words are nonsense words.

The background to the phonics screening check in England

We can trace the origins of the phonics screening check in England to two influential sources: the ‘Clackmannanshire Study’ and the ‘Rose Report’. In his 2006 report on early reading, Sir Jim Rose drew heavily on a comparative study conducted by Rhona Johnston and Joyce Watson in the small Scottish county of Clackmannanshire. After comparing the reading achievements of three groups of students taught using different phonic methods, the two researchers concluded that the group taught by means of synthetic phonics achieved significantly better results than either of the two other groups, which were taught by means of analytic phonics and a mixed-methods approach respectively. Although the study received little traction in Scotland and has subsequently been critiqued as methodologically flawed, it was warmly embraced in England, especially by Rose, who was an advocate of synthetic phonics.

The 2006 Rose Report was influential in shaping early reading pedagogy in England, and from 2010 systematic synthetic phonics not only became the exclusive method of teaching early reading in English schools, it was made statutory by the newly elected Conservative-Liberal Democrat Coalition under David Cameron. The then Education Secretary, Michael Gove, and his Schools Minister, Nick Gibb, announced a match-funded scheme in which schools were required to purchase a synthetic phonics program. Included in the list of recommended programs was one owned by Gibb’s Literacy Advisor. This program is now used in 25% of English primary schools. In 2012, Gove introduced the phonics screening check for all Year One students (5-6 year olds) in England, and in 2017, Gibb toured parts of Australia promoting the test here.

To what extent has the Phonics Screening Check been successful?

In its first year, only 58% of students in England passed the test, but in subsequent years results have improved. Students who fail the test must re-sit it at the end of Year Two. By 2016, 81% of Year One students passed the test, but since then there has only been an increase of 1%.

Gibb cites this increase in scores over a six-year period as proof that the government has raised standards in reading, and advocates of the test in Australia have seized upon the data as evidence in support of their case.

At face value, the figures look impressive. However, when we compare phonics screening check results with Standard Assessment Test (the UK equivalent of NAPLAN) scores in reading for these students a year later, the results lose their shine. In 2012, 76% of Year Two students achieved the expected Standard Assessment Test level in reading, but last year only 75% achieved the same level. Clearly, then, the phonics screening check is not indicative of general reading ability and does not serve as a purposeful diagnostic measure of reading.

In a recent survey of the usefulness of the phonics screening check in England, 98% of teachers said it did not tell them anything they did not already know about their students’ reading abilities. Following the first year of the test in 2012, when only 58% of students achieved the pass mark, teachers explained that it was their better readers who were failing the test. Although these students were successfully making the letter-sound correspondences in nonsense words, in the blending phase they were reading real words that were visually similar to the pseudo words.

The conclusion is that authentic reading combines decoding with meaning: the better readers failed precisely because they instinctively turned letter strings into meaningful words.

Furthermore, as every teacher knows, high-status tests dominate curriculum content, which in this case means that giving greater attention to synthetic phonics, in order to get students through the test, leaves less time for other reading strategies.

Whilst the systematic teaching of phonics has an important place in a teacher’s repertoire of strategies, it makes little sense to treat it as the exclusive method of teaching reading, as is the case in England. Giving it privileged status through a test does exactly that.

Perhaps this is the key reason why, in England, phonics screening check scores have improved but students’ reading abilities have not.

I don’t think Australia should be heading down the same dead-end path.

Dr. Paul Gardner is Senior Lecturer in Primary English in the School of Education at Curtin University. Until 2014, he taught at several universities in the UK.