Evidence-based policy

One provocative question: what on earth does evidence-based really mean?

This post was written before Alan Tudge took leave from his position as the Minister for Education. But he’s not the only one to bang on about ‘evidence’ without really being clear what he means.

There can be little argument against wanting university Schools of Education to impart to their students knowledge premised on systematically acquired evidence. It is irrefutable that teacher educators want their students to leave university and enter the classroom confident in the delivery of best practices. However, the requirement for ‘evidence-based practice’ is in danger of becoming a political polemic in which knowledge may be obfuscated by ideology, rather than being the outcome of systematic investigation.

Writing in The Australian, Paul Kelly ‘reflects’ on the then Federal Education Minister Alan Tudge’s ‘drive’ to ensure universities impart ‘…evidence-based practices such as phonics and explicit teaching instruction methodologies.’ The former Minister issues a warning that he will use ‘the full leverage’ of the Federal Government’s $760m budget to insist ‘…evidence-based practices are taught…’ in universities. Yet the threat is based more on the assumption that evidence-based practices are not being taught in our universities than on any substantial evidence that they are not.

It is ironic that the former Minister should argue for something on the basis of a lack of evidence. Aside from this point, questions arise around the nature of the evidence the former Minister considers to be bona fide in relation to practice. This is an issue around which there is a distinct lack of clarity. The former Minister clearly states what he does not want, which includes sociology and socio-cultural theory. His wish to see the demise of branches of thinking is questionable, given that it is usually dictatorial regimes that close down thought in their nation’s academies. He wants a tightly prescriptive curriculum for teacher education. In this respect, he appears to be following the Conservative administration of Boris Johnson in Britain, where a similar proposal has been tabled for English universities, resulting in some of the world’s top universities describing the plan as deeply flawed and having damaging consequences. If Boris Johnson wants something and Oxford and Cambridge consider it foolhardy, the weight of opinion surely tilts in favour of the academies.

The point remains as to the kind of ‘evidence’ upon which evidence-based practice is premised. What may pass as ‘evidence’ is not necessarily bona fide ‘knowledge’. All research, including educational research, involves knowledge that is acquired by means of rigorous, systematic investigation within clearly defined parameters. Even so, the outcomes of an investigation may be influenced by a number of factors, including: ontological perspective; the framing of the research questions; methodological approaches; analytical methods; researcher interpretation and the degree to which any funding body remains impartial. Ultimately, before it can take its place in the pantheon of evidence, research must be interrogated by means of independent peer review and subsequently published in a highly respected, discipline-relevant journal. Even then, sometimes what may appear to be good evidence can prove to be disastrous in its outcomes. We do not know if the ‘evidence’ to which the former Minister refers satisfies these requirements. What is certain is that the ‘evidence’ used by Paul Kelly to suggest universities are ‘failing’ their students and the nation’s schools does not meet most of these standards of respected research.

It was an Australian doctor, William McBride, who in 1961 published a letter in The Lancet suggesting that thalidomide had negative consequences, drawing attention to the possible fallibility of evidence. Randomised controlled trials (RCTs) of the drug in rats had shown it to be effective in controlling morning sickness, but it took observation of multiple cases to prove the drug was not fit for purpose.

So, what kind of ‘evidence’ is being referred to by the former Minister when he rightly insists we need to ensure that pedagogy is ‘evidence-based’? Is he referring to evidence derived from primary research, such as randomised controlled trials (RCTs) and observational studies, or secondary research, including systematic reviews of the research literature? The fact is there is no single type of evidence. It is generally recognised that different evidence types have different methodological strengths. At the pinnacle of the ‘hierarchy of evidence’ are systematic reviews, followed by RCTs, cohort studies and then case-control studies, case reports and finally expert opinion. Without identifying the type of evidence to which he refers, the former Minister appears to resort to lay opinion disguised as evidence.

Without clarity of thought, political policy based on vague supposition could lead to prescriptive measures that result in ‘damaging consequences’. As the thalidomide example cited above demonstrates, a single type of evidence is not always sufficient proof, and multiple types of evidence may be necessary to triangulate knowledge. Rather than denouncing certain disciplines of thought and prescribing others, perhaps the way forward is to systematically interrogate different types of evidence in order to evaluate their efficacy as bona fide knowledge. The best way to do this is by means of teacher-academics and teacher-practitioners working collaboratively, across multiple settings, engaging in systematic research, and cross-referencing results. For this to happen, there needs to be a commitment by government to fund, not cut, educational research. Australia has some of the finest Schools of Education in the world; they are staffed by dedicated academics who want the best for their students and the best for the nation’s school children. What universities need is a knowledge-rich government, not political polemic that does not even reach the baseline of the ‘hierarchy of evidence’.

                       

Paul Gardner is a Senior Lecturer in the School of Education at Curtin University. He is also the United Kingdom Literacy Association’s Country Ambassador to Australia. Paul specialises in research around writer identity; compositional processes and pedagogy. He has written about process drama, EAL/D, multicultural education and collaborative learning. His work is premised upon inclusion and social justice.  Twitter @Paugardner

Using evidence to help build and evaluate good ideas in education technology

As researchers, we care that our educational systems improve, support all learners, and are grounded solidly in research evidence. But how do we work with stakeholders like educational technology startups to support effective use of that evidence? Researchers and practitioners worry about this, because we care about evaluating and scaling good ideas. By ‘scaling’ we mean adjusting and improving good ideas as they are rolled out and used.

Some common ways that people think about how we build evidence and scale innovations include:

  • taking approaches tested in controlled settings and implementing them
  • looking for ‘success stories’ and trying to copy lessons from them, and
  • taking a systematic approach to analysing the context for places to change and evaluating those changes (the Improvement Method).

Research on how we use evidence in policy and practice (and policy practice) can help inform us when we try to work with startups and other stakeholders on education projects. Professor of Politics and Public Policy at the University of Stirling in the UK, Paul Cairney, compares the three approaches in the table below.

Three approaches to evidence-based policy-making

Emulate Approach

In much work in education, we are looking to implement programs or technologies in contexts using an emulation approach: copying tested interventions. In our teaching that can also result in coming at research from a top-down perspective, using key studies and methods but with a disconnect from local needs and context.

But this approach is critiqued for its simplicity in the education context, because it implies that interventions occur in a vacuum rather than in a complex context where we’ve already got lots of interventions going on. We might be evaluating a program that has already been implemented, and often our implementation process doesn’t follow this linear model.

Storytelling Approach

The pushback against the emulation approach is sometimes to instead focus very heavily on local context and storytelling approaches. This approach respects the expertise of professionals – which is important – but can result in key lessons not being distilled and shared, idiosyncratic ‘hit or miss’ practices, and ad hoc improvement cycles that may be driven by particular interests.

In the edtech space, much of the evaluation conducted by providers is based on testimonials. Although these can be useful, they’re typically not going to get at deeper issues of learning or help us evaluate our work. 

Improvement Methods

So, then, Improvement methods have been adopted in education systems, for example explicitly by the Carnegie Foundation, an independent research and policy centre in the US, and arguably in other forms such as Research Practice Partnerships (which are collaborative, long-term relationships between researchers and practitioners, designed to improve problems of practice in education) and other design based research approaches. Because these approaches work closely with practitioners to connect theory and real-world problems, they attempt to avoid ‘transmissive’ communication (one way communication) of research.

Our UCL EDUCATE project

At UCL (University College London) – which Simon recently visited while on sabbatical – the EDUCATE project has been created to help build a stronger evidence base in the EdTech sector. It uses this kind of approach. The approach is visualised through the ‘golden triangle’ connecting EdTech companies, entrepreneurs and start-ups with first-class business trainers, experts and mentors.


The Golden Triangle of evidence-informed educational technology

The UCL EDUCATE project worked with 252 small to medium-sized enterprises (max 250 employees, <£5m annual turnover) in 12 cohorts between 2017-19.  The idea was to get EdTech creators, educators, investors and policy makers working together to understand what “works for learners and how to use technology to serve its users effectively.” As the program developed, it shifted from more general introductions to research methods and established research knowledge, to greater recognition that the nature of evidence is both varied and serves different purposes for enterprises at different stages of development.

The EDUCATE programme avoided the issue of transmissive or emulation-based research by building capacity in educational technology enterprises to conduct their own research, using theories of change to generate practical, robust research. The aim, then, isn’t just to translate research into practice, or to implement outcomes from RCTs, but to move from storytelling about products to an improvement mindset.

UTS Implementing Learning Analytics

In the work we’ve been conducting at the University of Technology Sydney we’ve taken a kind of improvement-based approach, looking at existing teaching practices and seeking to augment those practices, rather than simply dropping in a new technology without understanding the context, or with a requirement for a particular type of teaching before it can be used. Our focus is improvement-oriented innovation. This approach is intended to improve adoption and to support existing good practices by learning from them and amplifying them through the technology.

We believe it is important, when we think about the role of new technologies and approaches in education, to consider the way we use evidence. Understanding the different approaches – implementation, storytelling, or improvement – and how they work to achieve impact can be invaluable to all stakeholders.

Simon Knight is a Senior Lecturer at the Faculty of Transdisciplinary Innovation at the University of Technology, Sydney. His research investigates how people find and evaluate evidence, particularly in the context of learning and educator practices. Dr Knight received his Bachelor’s degree in Philosophy and Psychology from the University of Leeds before completing a teacher education program and Philosophy of Education MA at the UCL Institute of Education. Following teaching high school social sciences, Dr Knight completed an MPhil in Educational Research Methods at Cambridge, and PhD in Learning Analytics at the UK Open University. Simon is on Twitter @sjgknight

Anissa Moeini is a doctoral candidate at the UCL Knowledge Lab, Institute of Education, University College London, UK. As a seasoned tech entrepreneur, Anissa identified the need to build research capacity in edtech enterprises that is both agile to their pace of change and also adaptable to the rhythm of SMEs. Through her doctoral research she developed the Evidence-informed Learning Technology Enterprise Framework (ELTE) as a practical tool for edtech companies and other non-academic stakeholders (investors, policymakers and education practitioners)  to both evaluate the efficacy of edtech enterprises (i.e. their products and services) and to build capacity to be evidence-informed.  Anissa completed her MA at Teachers College, Columbia University in NY, USA and her iBBA at the Schulich School of Business in Toronto, Canada. She will be defending her doctoral dissertation in 2020. Anissa is on Twitter @AnissaMoeini

Alison Clark-Wilson is a Principal Research Fellow at UCL Knowledge Lab, UCL Institute of Education, London. Her research spans the EdTech sector with a particular emphasis on the design, implementation and evaluation of technology in real school settings. Dr Clark-Wilson received a Bachelor’s degree in Chemical Engineering prior to becoming a secondary school mathematics teacher in the early 1990s. Her 30-year career has spanned school, university and industry-based education contexts. Dr Clark-Wilson completed a MA at the University of Chichester and a PhD from UCL Institute of Education, both in mathematics education. Alison is on Twitter @Aliclarkwilson

 

National Evidence Institute: let’s push for quality of USE, not just quality of evidence

The use of evidence in education has never had a higher profile. In Australia much interest in school evidence use has been sparked by the promise to develop an ‘independent national evidence institute’ as part of the recent National School Reform Agreement. It is how this institute will work that interests me.

The aim of the institute, as the agreement puts it, will be to make sense of what works in improving school outcomes and – most importantly – then translate this research into practical resources that can be used by classroom teachers and school leaders.

As we are still in the planning stages, and based on research on evidence use elsewhere in the world, I believe there are qualities and characteristics that educators could be pushing for to help ensure the institute works well for them.

1. We need a national evidence institute that focuses on use as well as evidence

Any evidence institute faces the risk of being drawn into expectations of just providing evidence rather than supporting how teachers and educational leaders will use that evidence. Arguably this latter process is far more important and difficult to establish.

The experiences of the What Works Centres in the UK are illustrative. A recent analysis of their work against their three main aims (generating evidence, translating evidence, and supporting evidence adoption) found that there was far less activity around evidence adoption relative to evidence generation and translation.

 It seems, then, that it is all too easy for evidence centres to slip into ‘a research production (push) approach to the use of research, rather than problem-solving, demand-led (pull) approach’.

I believe there is an exciting opportunity for the Australian evidence institute to take a different approach, to learn from experiences in the UK and elsewhere, and to articulate the explicit aspiration to be a national evidence use institute.

2. We need a national evidence institute that supports quality of use as well as quality of evidence

A Monash University project designed specifically to improve the use of research evidence in Australian schools, the Q Project, has argued that discussions about quality in relation to evidence use have focused almost exclusively on the quality of the evidence, but not the quality of its use. While there has been long-standing debate about what counts as quality evidence, deliberation about what counts as quality use has been much more limited.

This situation is changing as awareness and understanding of ‘quality use’ starts to grow. Work within and beyond education is beginning to suggest that quality use involves not only appropriate, rigorous evidence but also thoughtful, critical use of that evidence within decision-making processes that are transparent and accountable.

In addition, it requires not only relevant skills and understandings but also inquiry mindsets and relationships of respect and challenge. Fostering the development of these kinds of characteristics therefore offers a distinctive opportunity for the Australian national evidence institute to be part of supporting not only high-quality evidence but also high-quality use.

As the head of one of the What Works Centres in the UK, Sir Kevan Collins, reasoned recently: ‘Used intelligently, evidence is the teacher’s friend’.

Helping to work out what it means to use evidence intelligently in Australian schools and school systems needs to be a key priority for any new national evidence institute.

3. We need a national evidence institute that frames everything around improvement

Another key mission for a new national evidence institute should be establishing clarity about the relationship between using evidence and improving education. As Harvard professor, Carol Weiss, argued thirty years ago, educators would do well to stop thinking about ‘How can we increase the use of research in decision making?’ and focus instead on ‘How can we make wiser decisions, and to what extent, in what ways, and under what conditions, can social research help?’. 

These two questions are subtly (but significantly) different because they shift the focus from increasing the impact of research to supporting the improvement of practice, reminding us that evidence use is a means to an end, not an end in itself.

To me this suggests that the underlying purpose of a national evidence institute should be to support educational improvement and to make clear the distinctive contribution that using evidence can make to realising this aspiration.  Work in the US on improving the use of research evidence talks about ‘advancing the use of research evidence in ways that benefit youth’. The final five words of this statement are the most important ones, as they make clear the ‘So what?’ of evidence use. 

4. We need a national evidence institute that follows an ethos of ‘less is more’

In a world of information abundance it is increasingly being argued that it is most effective to focus ‘on a small number of carefully selected and optimised activities that strongly support things you value, and then happily miss out on everything else’ (as professor of computer science at Georgetown University in the US, Cal Newport, reminds us). This is potentially good advice when it comes to educational evidence use.

Indeed, there are a number of ways in which a national evidence institute would benefit from being intentionally minimalist or ‘less is more’ in its approach.

First, is the benefit of being very clear about the limits as well as the potential of evidence – as emeritus professor of educational assessment at University College London, Dylan Wiliam, argues, ‘Evidence is important, but what is more important is […] teacher expertise and professionalism [to] make better judgments about when, and how, to use research’.

Second, is the benefit of being clear which educational challenges to focus on – such as by developing processes to identify practice in need of evidence or by developing evidence-based guidance on high-priority issues with strong evidence but inconsistent practice.

Finally, is the benefit of using evidence to help people and organisations to know what to stop doing. In this respect, a national evidence institute needs to show leadership not only in focusing on ‘a small number of carefully selected and optimised activities’ but also in ‘happily missing out on everything else’.

Against the backdrop of calls for a research-rich teaching profession in Australia, the establishment of an independent national evidence institute represents a unique opportunity to better understand and support evidence-informed improvement in our schools. Let’s work together to realise this potential.

Mark Rickinson is an Associate Professor in the Faculty of Education at Monash University in Melbourne. Mark’s work is focused on improving the use and usefulness of educational research in policy and practice. He is currently leading a new 5-year initiative (The Q Project) to improve the use of research evidence in Australian schools. Mark is on Twitter @mark_rickinson

The above blog post is based on a presentation Mark gave as part of a panel discussion event about the National Evidence Institute on 26 July 2019 organised by the ‘Schools and Education Systems’ Special Interest Group (SIG) of the Australian Association of Research in Education (AARE).  

The problem with using scientific evidence in education (why teachers should stop trying to be more like doctors)

For teachers to be like doctors, and base practice on more “scientific” research, might seem like a good idea. But medical doctors are already questioning the narrow reliance in medicine on randomised controlled trials that Australia seems intent on implementing in education.

In randomised controlled trials of new drugs, researchers get two groups of comparable people with a specific problem and give one group the new drug and the other group the old drug or a placebo.  No one knows who gets what. Not the doctor, not the patient and not the person assessing the outcomes. Then statistical analysis of the results informs guidelines for clinical practice. 

In education, though, students are very different from each other. Unlike those administering placebos and real drugs in a medical trial, teachers know if they are delivering an intervention. Students know they are getting one thing or another. The person assessing the situation knows an intervention has taken place. Constructing a reliable educational randomised controlled trial is highly problematic and open to bias.
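To make that contrast concrete, here is a minimal, purely illustrative simulation – not from the original post, with invented numbers, and assuming the NumPy and SciPy libraries are available – of the statistical comparison at the heart of a two-arm trial. In a drug trial the allocation below would be concealed from everyone involved; in a classroom, as noted above, the teacher, the students and the assessor all know who received the intervention.

# Purely illustrative sketch (not from the original post; all numbers invented):
# the statistical comparison at the core of a two-arm randomised trial.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=1)

# 200 comparable participants, randomly allocated to the new treatment or a placebo.
n = 200
allocation = rng.permutation(np.repeat(["treatment", "placebo"], n // 2))

# Hypothetical outcome scores: the treatment group improves slightly more on average.
outcomes = np.where(
    allocation == "treatment",
    rng.normal(loc=0.3, scale=1.0, size=n),
    rng.normal(loc=0.0, scale=1.0, size=n),
)

# The trial's conclusion rests on comparing the two groups statistically;
# clinical guidelines are then built on results like these, aggregated across trials.
treated = outcomes[allocation == "treatment"]
control = outcomes[allocation == "placebo"]
t_stat, p_value = stats.ttest_ind(treated, control)
print(f"Mean difference = {treated.mean() - control.mean():.2f}, p = {p_value:.3f}")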

As a doctor and a teacher thinking, writing and researching together, we believe that a more honest understanding of the ambivalences and failures of evidence-based medicine is essential for education.

Before Australia decides teachers need to be like doctors, we want to tell you what is happening and give you some reasons why evidence-based medicine itself is said to be in crisis.

1. Randomised controlled trials are just one kind of evidence

Medicine now recognises a much broader evidence base than just randomised controlled trials. Other kinds of medical evidence include: practical “on-the-job” expertise; professional knowledge; insights provided by other research such as case studies; intuition; wisdom gained from listening to patient histories and discussions with patients that allow for shared decision-making or negotiation.

Privileging randomised controlled trials allows them to become sticks that beat practitioners into uniformity of practice, no matter what their patients want or need. Such practitioners become “cookbook” doctors or, in education, potentially, “cookbook” teachers. The best and most recent forms of evidence based medicine value a broad range of evidence and do not create hierarchies of evidence. Education policy needs to consider this carefully and treat all forms of evidence equally.

2. Medicine can be used as a bully

Teaching is a feminised profession, with a much lower status than medicine. It is easy for science to exert a masculinist authority over teachers, who are required to be ever more scientific to seem professional.  They are called on to be phallic teachers, using data, tools, tests, rubrics, standards, benchmarks, probes and scientific trials, rather than “soft” skills of listening, empathising, reflecting and sharing.

A Western scientific evidence-base for practice similarly does not value Indigenous knowledges or philosophies of learning. Externally mandated guidelines also negate the concepts of student voice and negotiated curriculum. While confident doctors know the randomised controlled trial-based statistics and effect sizes need to be read with scepticism, this is not so easy for many teachers. If randomised controlled trial-based guidelines are to rule teaching, teachers will also potentially be monitored for compliance with guidelines they may not fully understand or accept, and which may potentially harm their students.

3. Evidence based medicine is about populations, not people

While medical randomised controlled trials save lives by demonstrating the broad effects of interventions, they make individuals and their needs harder to perceive and respect.  Randomised controlled trial-based guidelines can mean that diverse people are forced to conform to simplistic ideals. Rather than starting with the patient, the doctor starts with the rule. Is this what we want for teaching? When medical guidelines are applied in rigid ways, patients can be harmed.

Trials cannot be done on every single kind of person and so inevitably, many individuals are forced to have treatments that will not benefit them at all, or that are at odds with their wishes and beliefs. Educators need to ensure that teachers, not bureaucrats or researchers, remain the authority in their classrooms.

4. Scientific evidence gives rise to gurus

Evidence-based practice can give rise to the cult of the guru. Researchers such as John Hattie, and their trademarked programs like “Visible Learning” based on apparently infallible science, can rapidly colonise and dominate education. Yet their medicalised glamour disguises the reality that there is no universal and enduring formula for “what works”.

In 2009, based on evidence, healthy people were advised to take aspirin to prevent heart attacks. Yet also in 2009, new medical evidence “proved” that the harms of healthy people taking aspirin outweigh the benefits.

In 2009, in his book Visible learning: A synthesis of over 800 meta-analyses relating to achievement, Hattie said class size does not matter. In 2014, further research found that reducing class size has an important and lasting impact, especially for students from disadvantaged backgrounds.

While medical-style guidelines may seem to have come from God, such guidelines, even in medicine, are often multiple and contradictory. The “cookbook” teacher will always be chasing the latest guideline, disempowered by top-down interference in the classroom.

In medicine, over five years, fifty percent of guideline recommendations are overturned by new evidence. A comparable situation in education would create unimaginable turmoil for teachers.

5. Evidence-based practice risks conflicts of interest

Educational publishers and platforms are very interested in “scientific” evidence.  If a researcher can “prove” an intervention works and should be applied to all, this means big dollars. Randomised controlled trials in medicine routinely produce outcomes that are to the benefit of industry. Only certain trials get funded. Much unfavourable research is never published. Drug and medical companies set agendas rather than responding to patient needs, in what has been described as a guideline “factory”.

Imagine how this will play out in education. Do we want what happens in classrooms to be dictated by profit-driven companies, or by student-centred teachers?

What needs to happen?

We call for an urgent halt to the imposition of ‘evidence-based’ education on Australian teachers, until there is a fuller understanding of the benefits and costs of narrow, statistical evidence-based practice. In particular, education needs protection from the likely exploitation of evidence-based guidelines by industries with vested interests.

Rather than removing teacher agency and enforcing subordination to gurus and data-based cults, education needs to embrace a wide range of evidence and reinstate the teacher as the expert who decides whether or not a guideline applies to each student.

Pretending teachers are doctors, without acknowledging the risks and costs of this, leaves students consigned to boring, standardised and ineffective cookbook teaching. Do we want teachers to start with a recipe, or the person in front of them?

Here is our paper for those who want more: A broken paradigm? What education needs to learn from evidence-based medicine by Lucinda McKnight and Andy Morgan

Dr Lucinda McKnight is a pre-service teacher educator and senior lecturer in pedagogy and curriculum at Deakin University, Melbourne. She is also a qualified health and fitness professional. She is interested in the use of scientific and medical metaphor in education. Lucinda can be found on Twitter @LucindaMcKnigh8

Dr Andy Morgan is a British Australian medical doctor and senior lecturer in general practice at Monash University, Melbourne. He has an MA in Clinical Education from the Institute of Education, UCL, London. His research interests are in consultation skills and patient-centred care. He is a former fellow of the Royal College of General Practitioners, and current fellow of the Australian Royal College of General Practitioners.



What’s good ‘evidence-based’ practice for classrooms? We asked the teachers, here’s what they said

Calls for Australian schools and teachers to engage in ‘evidence-based practice’ have become increasingly loud over the past decade. Like ‘quality’, it’s hard to argue against evidence or the use of evidence in education, but also like ‘quality’, the devil’s in the detail: much depends on what we mean by ‘evidence’, what counts as ‘evidence’, and who gets to say what constitutes good ‘evidence’ of practice.

In this post we want to tell you about the conversations around what ‘evidence’ means when people talk about evidence-based practice in Australian schools, and importantly we want to tell you about our research into what teachers think good evidence is.

Often when people talk about ‘evidence’ in education they are talking about two different types of evidence. The first is the evidence of teacher professional judgment collected and used at classroom level involving things like student feedback and teacher self-assessment. The second is ‘objective’ or clinical evidence collected by tools like system-wide standardised tests.

Evidence of teacher professional judgment

This type of evidence is represented in the Australian Teacher Performance and Development Framework. For example, the framework suggests that good evidence of teachers’ practice is rich and complex, requiring that teachers possess and use sharp and well-honed professional judgement. It says: “an important part of effective professional practice is collecting evidence that provides the basis for ongoing feedback, reflection and further development. The complex work of teaching generates a rich and varied range of evidence that can inform meaningful evaluations of practice for both formative and summative purposes” (p.6). It goes on to suggest that sources of this kind of evidence might include observation, student feedback, parent feedback and teacher self-assessment and reflection, among others.

‘Objective’ evidence

The second discussion around evidence promotes good evidence of practice as something that should be ‘objective’ or clinical, something that should be independent of the ‘subjectivity’ of teacher judgement. We see this reflected in, for example, the much-lauded “formative assessment tool” announced in the wake of Gonski 2.0 and to be developed by KPMG. The tool will track every child and ‘sound alarms’ if a child is slipping behind. By standardising formative assessment practices, it aims to remedy the purportedly unreliable nature of assessments of student learning that have not been validated. Indeed, the Gonski 2.0 report is very strongly imbued with the idea that evidence of learning that relies on teacher professional judgement needs to be overridden by more objective measures.

But what do teachers themselves think good evidence is?

We’ve been talking to teachers about their understanding and use of evidence, as part of our Teachers, Educational Data and Evidence-informed Practice project. We began with 21 interviews with teachers and school leaders in mid-2018, and have recently run an online questionnaire that gained over 500 responses from primary and secondary teachers around Australia.

Our research shows that teachers clearly think deeply about what constitutes good evidence of their practice. For many of them, the fact that students are engaged in their learning provides the best evidence of good teaching. Teachers were very expansive and articulate about what the indicators of such engagement are:

I know I’m teaching well based on how well my students synthesise their knowledge and readily apply it in different contexts. Also by the quality of their questions they ask me and each other in class. They come prepared to debate. Also when they help each other and are not afraid to take risks. When they send me essays and ideas they might be thinking about. Essentially I know I’m teaching well because the relationship is positive and students can articulate what they’re doing, why they’re doing it and can also show they understand, by teaching their peers. (Secondary teacher, NSW)

Furthermore, teachers know that ‘assessment’ is not something that stands independent of them – that the very act of using evidence to inform practice involves judgement. Their role in knowing their students, knowing about learning, and assessing and supporting their students to increase their knowledge and understanding is crucial. Balanced and thoughtful assessment of student learning relies on knowledge of how to assess, and of what constitutes good evidence.

Good evidence is gathering a range of pieces of student work to use to arrive at a balanced assessment. I believe I am teaching well when the student data shows learning and good outcomes. (Primary teacher, SA)

Gathering good evidence of teaching and learning is an iterative process: that is, it is a process of evaluating and adjusting that teachers constantly repeat and build on. It is part of the very fabric of teaching, and something that good teachers do every day in order to make decisions about what needs to happen next.

I use strategies like exit cards sometimes to find out about content knowledge and also to hear questions from students about what they still need to know/understand. I use questioning strategies in class and make judgements based on the answers or further questions of my students. (Secondary teacher, Vic)

I get immediate feedback each class from my students.  I know them well and can see when they are engaged and learning and when I’m having very little effect. (Secondary teacher, Qld)

Where does NAPLAN sit as ‘evidence’ for teachers?

Teachers are not afraid to reflect on and gather evidence of their practice, but too often, calls for ‘evidence-based practice’ in education ignore the evidence that really counts. Narrow definitions of evidence where it is linked to external testing are highly problematic. While external testing is part of the puzzle, it can be harmful to use that evidence for purposes beyond what it can really tell us – as one of us has argued before. And the teachers in our study well understood this. For them, NAPLAN data, for instance, was bottom of the list when it comes to evidence of their practice, as seen in the chart below.

This doesn’t mean they discount the potentially, perhaps partially, informative value in such testing (after all, about 72% think it’s at least a ‘somewhat’ valid and reliable form of evidence), but it does mean that, in their view, the best evidence is that which is tied to the day to day work that goes on in their classrooms.

Evidence rated from not useful to extremely useful by teachers in our survey

Teachers value a range of sources of evidence of their practice, placing particular emphasis on that which has a front row seat to their work, their own reflections and observations, and those of the students they teach. Perhaps this is because they need this constant stream of information to enable them to make the thousands of decisions they make about their practice in the course of a day – or an hour, or a minute. The ‘complex work of teaching’ does not need a formalised, ‘objective’ tool to help it along. Instead, we need to properly recognise the complexity of teaching, and the inherent, interwoven necessity of teacher judgement that makes it what it is.

What do teachers want?

Teachers were very clear about what they didn’t want.

Teachers are time poor. We are tired. It sounds good to do all this extra stuff but unless we are given more time it will just be another layer of pressure. (Secondary teacher, NSW)

Teachers believe in and want to rely on useful data but they don’t have the time to do it well. (Primary teacher, NSW)

It must be practical, helpful and not EXTRA. (Primary teacher, Vic)

They don’t want “extra stuff” to do.

They want relevant, high quality and localised professional learning. They want to better understand and work with a range of forms of useful data and research. They particularly find in-school teacher research with support useful, along with access to curated readings with classroom value. Social media also features as a useful tool for teachers.

Our research is ongoing. Our next task is to work further with teachers to develop and refine resources to support them in these endeavours.

We believe teachers should be heard more clearly in the conversations about evidence; policy makers and other decision-makers need to listen to teachers. The type of evidence that teachers want and can use should be basic to any plan around ‘evidence-based’ or ‘evidence-informed’ teaching in Australian schools.

Dr Nicole Mockler is Associate Professor of Education at the Sydney School of Education and Social Work at the University of Sydney. She is a former teacher and school leader, and her research and writing primarily focuses on education policy and politics and teacher professional identity and learning. Her recent scholarly books include Questioning the Language of Improvement and Reform in Education: Reclaiming Meaning (Routledge, 2018) and Engaging with Student Voice in Research, Education and Community: Beyond Legitimation and Guardianship (Springer, 2015), both co-authored with Susan Groundwater-Smith. Nicole is currently Editor in Chief of The Australian Educational Researcher. Nicole is on Twitter @nicolemockler

Dr Meghan Stacey is a lecturer in the sociology of education and education policy in the School of Education at the University of New South Wales. Taking a particular interest in teachers, her research considers how teachers’ work is framed by policy, as well as the effects of such policy for those who work with, within and against it. Meghan completed her PhD with the University of Sydney in 2018. Meghan is on Twitter @meghanrstacey

QandA: ‘what works’ in ed with Bob Lingard, Jessica Gerrard, Adrian Piccoli, Rob Randall, Glenn Savage (chair)

See the full video here

Evidence, expertise and influence are increasingly contested in the making of Australian schooling policy.

More than ever, policy makers, researchers and practitioners are being asked to defend the evidence they use, justify why the voices of some experts are given preference over others, and be critically aware of the networks of influence that determine what counts as evidence and expertise.

The release of the ‘Gonski 2.0’ report raises a number of complex questions about the use of evidence in the development of schooling policies, and the forms of expertise and influence that are increasingly dominant in shaping conversations about the trajectory of schooling reform.

The report signals an ever-increasing presence of federal government influence in shaping schooling policy in Australia’s federal system. It also strongly reflects global shifts towards a “what works” reform narrative, which frames policy decisions as only justifiable in cases where there is evidence of demonstrable impact.

Proposals such as the creation of a ‘national research and evidence institute’ by the Labor party, and related proposals by the Australian Productivity Commission to create a national ‘education evidence base’, signal a potentially new era of policy making in Australia, in which decisions are guided by new national data infrastructures and hierarchies of evidence.

These developments raise serious questions about which kinds of evidence will count (and can be counted) in emerging evidence repositories, which experts (and forms of expertise) will be able to gain most traction, how developments might change the roles of federal, state and national agencies in contributing to evidence production, and the kinds of research knowledge that will (or will not) be able to gain traction in national debates.

On November 6th, I hosted a Q&A Forum at the University of Sydney, co-sponsored by the AARE ‘Politics and Policy in Education’ Special Interest Group and the School and Teacher Education Policy Research Network at the University of Sydney.

It featured Adrian Piccoli (Director of the UNSW Gonski Institute for Education), Jessica Gerrard (senior lecturer in education, equity and politics at the University of Melbourne), Bob Lingard (Emeritus Professor at the University of Queensland and Professorial Research Fellow at the Australian Catholic University) and Rob Randall (CEO of the Australian Curriculum, Assessment and Reporting Authority).

What follows is an edited version of the event, featuring some key questions I posed to the panelists and some of their highlight responses.

See the full video here

Glenn: I want to start by considering the changing role and meaning of ‘evidence’ and how different forms of evidence shape conditions of possibility for education. What do you see as either the limits or possibilities of “what works” and “evidence-based” approaches to schooling reform?

Bob: It seems to me the ‘what works’ idea works with a sort of engineering conception of the relationship between evidence, research, policy making and professional practice in schools, and I think it also over simplifies research and evidence … I would prefer a relationship between evidence (and evidences of multiple kinds) to policy and to practice which was more of an enlightenment relationship rather than an engineering one … I think policy making and professional practice are really complex practices, and I think we can only ever have evidence-informed policy and evidence-informed professional practice, I don’t think we can have evidence-based … I think ‘what works’ has an almost inert clinical construction of practice. And I think there’s an arrogant certainty.

Adrian: The problem with the ‘what works’ movement is that it lends itself, particularly at a political level, to there being a ‘silver bullet’ to education improvement and the thing you launch the silver bullet on is a press release. I’ve always said the press release is the greatest threat to good education policy because it sounds good, in the lead up to an election, to say things like ‘independent public schools work’ so fund them, or it might be a phonics check, so let’s fund this because it works, but I think it lends itself to that kind of one-dimensional approach to education policy. But education reform is an art. What makes the painting great? It’s not the blue or the yellow or the red, it’s actually the right combination of those things. Education, at a political level, people can try to boil it down to things that are too simple.

Rob: I actually think the term [what works] is a useful term. If I go back to when I first started teaching, it’s a good question, ‘what works?’ Can you give me some leads? It’s not a matter of saying ‘this is it entirely’, but we’ve got to be careful of how the language enables us and not continue to diss it.

Glenn: NSW has created its Centre for Education Statistics and Evaluation, which describes itself as Australia’s first ‘data hub’ in education that will tell us “what works” in schools and ensure decisions are evidence-informed. On the Centre’s website, it tells us that NSW works with the concept of ‘an evidence hierarchy’. On top of the hierarchy is ‘the gold standard’, which includes either ‘meta analyses’ or ‘randomised controlled trials’. To me this begs a question: how might the role of researchers be shifting now ‘the best’ evidence is primarily based on large-scale and quantitative methods?

Jess: To me it’s a funny situation to be in when your bread and butter work is producing knowledge and evidence but you find yourself arguing against the framing and enthusiastic uptake of something like ‘evidence-based policy’. Particularly concerning is this hierarchical organisation of evidences where randomised controlled trials, statistical knowledge and other things like meta analyses are thought to be more certain, more robust, more concrete than other forms of research knowledge, such as qualitative in-depth interviews with school teachers about their experiences. The kind of knowledge that is produced through a statistical or very particular causal project becomes very narrow because it has to bracket out so many other contextual factors in order to produce ‘a certainty’ about social phenomena. We can’t rely on a medical model, where RCTs come from, for something like classroom practice, and you can see this in John Hattie’s very influential book Visible Learning. You just have to look at the Preface where he says that he bracketed out of his study any factor that was out of school. When you think about that it becomes unsurprising that the biggest finding is that teachers have the most impact, because you’ve bracketed out all these other things that clearly have an impact … With the relationship between politics and policy, I think it’s really interesting that, politically speaking, evidence-based policy becomes very popular around some reforms, yet not around other reforms, so school autonomy, great example, there’s no evidence to say that has a positive impact on student achievement but yet it gets rolled out, there’s no RCT on that, there’s no RCT on the funding of elite private schools, but yet we do these things. I think we can get into a trap of ‘policy-led evidence’ when political interests try to wrestle evidence for their own purposes.

Glenn: Let’s consider which ‘experts’ tend to exert the most influence in schooling. For example, a common claim is that some groups and individuals might get more of a say than others in steering debates about schooling. In other words, not everyone ‘gets a seat at the table’ when decisions are made – and if they do, voices are not always equally heard. A frequent criticism, for example, is that certain think tanks or lobby groups, or certain powerful and well-connected individuals, are often able to exert disproportionate power and influence. Would any of you like to comment on those dynamics and the claim that it might not be an even playing field of influence?

Bob: I think ‘think tank research’ is very different from the kind of research that’s done by academics in universities. The think tank usually has a political-ideological position, it usually takes the policy problem as given rather than thinking about the construction, I think it does research and writes reports which have specific audiences in mind, one the media and two the politicians. I remember once when I did a report for a government and the minister told me my problem was that I was ‘two-handed’. I’d say ‘on the one hand this might be the case, and on the other hand…’, but what he wanted was one-handed research advice, and I think in some ways the think tanks, that’s what they do.    

Glenn: Another important dimension here is that even when one’s voice is heard, often what ‘the public’ hears is far from the full story. And I think this is where we need to consider the role of the media and the 24-hour news cycle we now inhabit. For example, so much of what we hear about ‘the evidence’ driving schooling reform is filtered through the media; but this is invariably a selective version of the evidence. Do any of you have any thoughts or reflections on this complex dynamic between the media, experts, evidence and policy?

Adrian: Good education policy is really boring, right? It’s boring for the Daily Telegraph, it’s boring for the Sydney Morning Herald, it’s boring for the ABC, Channel 7, it’s boring. You talk curriculum, you talk assessment, you talk pedagogy, I mean when was the last time you saw the ‘pedagogy’ word in a news article? … what’s exciting is ‘you know what, here’s the silver bullet’ … and the public and media and the political process doesn’t have the patience for sound evidence-based education reform.

Rob: I think we’re at risk of underestimating the capability of the profession in terms of interpreting and engaging with this. I think we’re at risk of under-estimating the broader community.

Glenn: To me, it seems there’s something peculiar in terms of how expertise about education is constructed. For example, in the medical profession, many would see the expertise as lying with the practitioners themselves, the doctors, surgeons, and so on, who “possess” the expertise and are, therefore, the experts. If education mirrored this, then surely the experts would be the teachers and school leaders – and expertise would lie in their hands? But this often seems to be far from the way expertise is talked about in schooling. Instead, it seems the experts are often the economists, statisticians and global policy entrepreneurs who have little to do with schools. Why is it that the profession itself seems to so often be obscured in debates about expertise and schooling reform?

Jess: What we see now is because education and schooling is such a politically invested enterprise, with huge money attached to it, it’s never really been wrestled from the hands of government in terms of a professional body. So, a body like AITSL, for instance, which is meant to stand in as a kind of professional body, isn’t really representative of the profession, it doesn’t have those kinds of links to teachers themselves as the medical equivalent does. So, we’re in a curious state of affairs, I think you’re right Glenn, where who counts as having expertise are often not those who are within the street level, within the profession … We don’t have enough of an opportunity to hear from teachers themselves, to have unions and teachers as part of the public discussion, and when they are a part of the discussion they’re often positioned as being argumentative or troublesome as opposed to contributing to a robust public debate about education.

Bob: As we’ve moved into the kind of economies we have, the emphasis on schooling as human capital and so on, it is those away from schooling, the economists and others, who I think have formulated the big macro policy, rather than the knowledge of the profession.

Glenn: Up to this point we’ve been mainly talking about influence in terms of specific individuals, or groups, but also I think certain policies and forms of data also exert significant influence. I need only mention the term NAPLAN in front of a group of educators to inspire a flood of conversations (and often polarised opinion) about how this particular policy and its associated data influence their work. Is it a stretch to say that these policy technologies and data infrastructures now serve as political actors in their own right? Is there a risk when we start seeing data itself as a “source of truth” beyond the politics of its creation?

Jess: I think it’s absolutely seen in that way and I think that’s the problem with the hierarchy of knowledge or evidence. There’s a presumption that these so-called higher or more stable forms of knowledge can stand above the messiness of everyday life in schools or the complexity of social and cultural phenomena … there’s no way a number can convey the complexity, but because they seem so tantalisingly certain, they then have a life of their own.

Adrian: NAPLAN is the King Kong of education policy because it started off relatively harmless on this little island and now it’s ripping down buildings and swatting away airplanes. I mean it’s just become this dominant thing in public discourse around education.    

Rob: Let’s not get naïve about how people are using it [NAPLAN]. People use the data in a whole range of ways. It’s not that it’s good on one side and bad on the other … now if we want to, we could take the data away, or we could actually say, ‘let’s have a more complete discussion about it’ … give parents the respect they deserve, I do not accept that there’s a whole bunch of parents out there choosing schools on the basis of NAPLAN results.

Glenn: To finish tonight, I want to pose a final ‘big sky’ question. The question is: If you had the power to change one thing about how the politics of evidence, expertise or influence work in Australian schooling policy, what would that be?

Bob: I would want to give emphasis to valuing teacher professional judgment within the use of data and have that as a central element rather than having the data driving.

Adrian: I would make it a legal requirement that systems and governments have to put the interests of children ahead of the interests of adults in education policy.

Jess: I think I’m going to give a sociologist’s answer, which is to say that I think what I would want to see is greater political commitment to acknowledging the actual power that is held in the current production of data and the strategic use of that. The discussion also needs to address the ethical and political dimensions of education and schooling beyond what data can tell us.

Rob: I would like to pursue the argument about increasing the respect and nature, the acknowledgment of, and the expectation of, the profession … I think there is a whole bunch of teachers out there who do a fantastic job … given their fundamental importance to the community, to the wellbeing of this country going forward I’d be upping the ante for the respect for and expectation of teachers.

See the full video here

Glenn C. Savage is a senior lecturer in education policy and sociology of education at the University of Western Australia. His research focuses on education policy, politics and governance at national and global levels, with a specific interest in federalism and national schooling reform. He currently holds an Australian Research Council ‘Discovery Early Career Researcher Award’ (DECRA) for his project titled ‘National schooling reform and the reshaping of Australian federalism’ (2016-2019).

Here’s what is going wrong with ‘evidence-based’ policies and practices in schools in Australia

An academic’s job is, quite often, to name what others might not see. Scholars of school reform in particular are used to seeing paradoxes and ironies. The contradictions we come across are a source of intellectual intrigue, theoretical development and at times, humour. But the point of naming them in our work is often a fairly simple attempt to get policy actors and teachers to see what they might not see when they are in the midst of their daily work. After all, one of the advantages of being in ‘the Ivory Tower’ is having the opportunity to see larger, longer-term patterns of human behaviour.

This blog is an attempt to continue this line of endeavour. Here I would like to point out some contradictions in current public rhetoric about the relationship between educational research and schooling – focusing on teaching practices and curriculum for the moment.

The call for ‘evidence-based’ practice in schools

By now we have all seen repeated calls for policy and practice to be ‘evidence-based’. On one level, this is common sense – a call to restrain the well-known tendency of educational reforms to fervently push one fad after another, based mostly on beliefs and normative appeals (that is, messages about what one should or should not do in a given situation). And let’s be honest, these pushes often get tangled in party-political debates – between ostensible conservatives and supposed progressives. The reality is that both sides are guilty of pushing reforms with either no serious empirical basis or a half-baked re-interpretation of research – and both claim authority based on that ‘research’. Of course, not all high-quality research is empirical – nor should it all be – but the appeal to evidence as a way of moving beyond stalemate is not without merit. Calling for empirical adjudication or verification does provide a pathway to more secure bases for justifying which reforms and practices ought to be implemented.

There are a number of ways in which empirical analysis can already move educational reform forward, because we can name very common educational practices for which we have ample evidence that the effects are not what advocates intended. NAPLAN is a case in point: it has been implemented in a manner that directly contradicts what some of its advocates intended, becoming far more high-stakes than planned and narrowing the curriculum – a consequence its early advocates said would not happen. (Never mind that many of us predicted this. That’s another story.) This is an example of where empirical research can serve the vital role of assessing the difference between intended and experienced results.

Good research can turn into zealous advocacy

So on a general level, the case for evidence-based practice has a definite value. But let’s not over-extend this general appeal, because we also have plenty of experience of seeing good research turn into zealous advocacy with dubious intent and consequence. The current over-extensions of the empirical appeal have led paradigmatic warriors to push the authority of their work well beyond its actual capacity to inform educational practice. Here, let me name two forms of this over-extension.

Synthetic reviews

Take the contemporary appeal to summarise studies of specific practices as a means of deciphering which practices offer the most promise. (This is called a ‘synthetic review’; John Hattie’s well-known work would be an example.) There are, of course, many ways to conduct synthetic reviews of previous research – but we all know the statistical appeal of meta-analyses, based on one form or another of aggregating the effect sizes reported in research, has come to dominate the minds of many Australian educators, without a lot of reflection on the strengths and weaknesses of different forms of review.

So if we take the stock-standard effect size compilation exercise as authoritative, let us also note the obvious constraints built into that exercise. First, to do that work, all the included studies have to have measured an outcome that is seen to be the same outcome. This implies the outcome is a) actually valuable and b) sufficiently well defined to be measured consistently across studies. Since most research that fits this bill has already bought into the ideology behind standardised measures of educational achievement, that is its strongest footing – and it is good for that. These forms of analysis are also often about more than teaching alone, since the practices being summarised frequently include pre-packaged curriculum as well (e.g. direct instruction research assumes a previously set, given curriculum is being implemented).

Now just think about how many times you have seen someone say this or that practice has this or that effect size without also mentioning the very restricted nature of the studied ‘cause’ and measured outcome.

Simply ask ‘effect on what?’ and you have a clear idea of just how limited such meta-analyses actually are.
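To make that restriction concrete, here is a minimal, purely illustrative sketch of the arithmetic that typically sits behind a pooled effect size: a simple fixed-effect, inverse-variance weighting, written in Python. The study names, effect sizes and standard errors are invented; the point is only that the headline number is an average over studies that all had to measure the same narrow outcome.

```python
# Illustrative only: a fixed-effect pooling of standardised mean differences.
# The studies, effect sizes (d) and standard errors below are invented.
studies = [
    {"name": "Study A", "d": 0.40, "se": 0.10},
    {"name": "Study B", "d": 0.25, "se": 0.15},
    {"name": "Study C", "d": 0.55, "se": 0.20},
]

# Inverse-variance weights: more precise studies count for more.
weights = [1 / (s["se"] ** 2) for s in studies]

# The pooled effect is just a weighted average of the per-study effects.
pooled_d = sum(w * s["d"] for w, s in zip(weights, studies)) / sum(weights)

print(f"Pooled effect size: {pooled_d:.2f}")
# Note: every study here had to measure the *same* outcome for this average
# to mean anything – which is exactly the restriction discussed above.
```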

Randomised Controlled Trials

Also keep in mind what this form of research can actually tell us about new innovations: nothing directly. This last point applies doubly to the now ubiquitous calls for Randomised Controlled Trials (RCTs). By definition, an RCT cannot tell us what the effect of an innovation will be, simply because the innovation has to already be in place before an RCT can be run at all. And to be firm on the methodology, we don’t need just one RCT per innovation, but several – so that meta-analyses can be conducted based on replication studies.

This isn’t an argument against meta-analyses and RCTs, but an appeal to be sensible about what we think we can learn from such necessary research endeavours.

Both of these forms of analysis are fundamentally committed to rigorously studying single cause–effect relationships of the ‘X leads to Y’ form, since the most rigorous empirical assessment of causality in this tradition rests on isolating the effect of the designed cause – the X of interest – from everything else. This is how you specify just what needs to be randomised. Although RCTs in education grew out of a tradition of educational psychology that sought to test generalised claims about all of humanity, where randomisation was needed at the level of the individual student, most reform applications of RCTs will randomise whatever unit of analysis best fits the intended reform. Common contemporary applications randomise teachers or schools into this or that innovation. The point of that randomisation is to find effects that are independent of the differences between whatever is randomised.
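As a purely hypothetical illustration (the school names and outcome scores below are invented), randomising at the school level looks roughly like this: whole schools are assigned to a reform arm or a business-as-usual arm, and the estimated effect is simply the gap between the two arms’ average outcomes.

```python
import random

# Hypothetical sketch: randomise whole schools (the unit that fits the reform),
# not individual students, into a reform arm and a business-as-usual arm.
random.seed(42)  # fixed seed so the illustration is reproducible
schools = ["School " + letter for letter in "ABCDEFGH"]
random.shuffle(schools)
treatment, control = schools[:4], schools[4:]

# Invented school-average outcome scores collected after the reform period.
# (No real effect is simulated here, so any gap is just noise.)
outcomes = {school: random.gauss(60, 5) for school in schools}

def arm_mean(arm):
    """Average outcome across the schools in one arm."""
    return sum(outcomes[s] for s in arm) / len(arm)

effect = arm_mean(treatment) - arm_mean(control)
print(f"Estimated effect of the reform: {effect:.1f} points")
# The randomisation is what licenses reading this gap as the reform's effect,
# independent of pre-existing differences between schools – and it can only
# be run once the reform already exists to be implemented somewhere.
```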

Research shows what has happened, not what will happen

The point of replications is to guard against known human flaws (biases, mistakes, etc.) and to examine the effect of contexts. This is where our language about what research ‘says’ needs to be much more precise than what we typically see in news editorials and on Twitter. For example, when phonics advocates say ‘rigorous empirical research has shown phonics program X leads to effect Y’, don’t forget the background presumptions. What that research may have shown is that when phonics program X was implemented in a systemic study, the outcomes measured were Y. What this means is that the claims which can reasonably be drawn from such research are far more limited than zealous advocates hope. That research studied what happened, not what will happen.

Such research does NOT say anything about whether or not that program, when transplanted into a new context, will have the same effect. You have to be pretty sure the contexts are sufficiently similar to make that presumption. (Personally I am quite sceptical about crossing national boundaries with reforms, especially into Australia.)

Fidelity of implementation studies and instruments

More importantly, such studies cannot say anything about whether or not reform X can actually be implemented with sufficient ‘fidelity’ to expect the intended outcome. This reality is precisely why researchers seeking the ‘gold standard’ of research are now producing voluminous ‘fidelity of implementation’ studies and instruments. The Gates Foundation has funded many of these in the US, and I see manuscripts intended for publication from this work all the time in my editorial role. Essentially, fidelity of implementation measures attempt to estimate the degree to which a new program has been implemented as intended, often by analysing direct evidence of the implementation.

Each time I see one of these studies, it raises the question: ‘If the intent of the reform is to produce the qualities identified in the fidelity of implementation instrument, doesn’t the need for that instrument suggest the reform isn’t readily implemented?’ And why not use the fidelity of implementation instrument itself, if that is what you really think has the effect? For a nice critique and re-framing of this issue see Tony Bryk’s Fidelity of Implementation: Is It the Right Concept?

The reality of ‘evidence-based’ policy

This is where the overall structure of the current push for evidence-based practices becomes most obvious. The fundamental paradox of current educational policy is that most of it is intended to centrally pre-determine what practices occur in local sites, what teachers do (and don’t do) – and yet the policy claims this will lead to the most advanced, innovative curriculum and teaching. It won’t. It can’t.

What it can do is provide a solid base of knowledge for teachers to draw on in their own professional judgements about the best thing to do with their students on any given day. It might help convince schools and teachers to give up on historical practices and debates we are pretty confident won’t work. But what will work depends entirely on the innovation, professional judgement and, as Paul Brock once put it, nous of all educators.


James Ladwig is Associate Professor in the School of Education at the University of Newcastle and co-editor of the American Educational Research Journal.  He is internationally recognised for his expertise in educational research and school reform. 

Find James’ latest work in Limits to Evidence-Based Learning of Educational Science, in Hall, Quinn and Gollnick (Eds), The Wiley Handbook of Teaching and Learning, Wiley-Blackwell, New York (in press).

James is on Twitter @jgladwig