
The problem with using scientific evidence in education (why teachers should stop trying to be more like doctors)

Asking teachers to be more like doctors, and to base their practice on more “scientific” research, might seem like a good idea. But medical doctors are already questioning the narrow reliance on randomised controlled trials that Australia now seems intent on importing into education.

In a randomised controlled trial of a new drug, researchers recruit two comparable groups of people with a specific problem and give one group the new drug and the other group the old drug or a placebo. No one knows who gets what: not the doctor, not the patient and not the person assessing the outcomes. Statistical analysis of the results then informs guidelines for clinical practice.
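
For readers who like to see the mechanics spelled out, here is a minimal, purely illustrative sketch (in Python) of that logic: participants are allocated to two groups at random and the groups’ average outcomes are compared. Every number, including the supposed treatment effect, is invented for illustration; a real trial also involves blinding, pre-registration and formal statistical testing.

```python
# Illustrative sketch only: random allocation and a simple comparison of
# group means, as in a two-arm randomised controlled trial.
# All values below are invented purely for illustration.
import random
import statistics

random.seed(0)

participants = list(range(200))
random.shuffle(participants)          # random allocation to the two arms
treatment_group = participants[:100]
placebo_group = participants[100:]

def outcome(received_treatment: bool) -> float:
    """Hypothetical outcome: the new drug shifts the average slightly."""
    return random.gauss(0.3 if received_treatment else 0.0, 1.0)

treatment_scores = [outcome(True) for _ in treatment_group]
placebo_scores = [outcome(False) for _ in placebo_group]

difference = statistics.mean(treatment_scores) - statistics.mean(placebo_scores)
print(f"Average difference (treatment minus placebo): {difference:.2f}")
```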

In education, though, students are very different from each other. Unlike those administering placebos and real drugs in a medical trial, teachers know if they are delivering an intervention. Students know they are getting one thing or another. The person assessing the situation knows an intervention has taken place. Constructing a reliable educational randomised controlled trial is highly problematic and open to bias.

As a doctor and a teacher thinking, writing and researching together, we believe that a more honest understanding of the ambivalences and failures of evidence-based medicine is essential for education.

Before Australia decides teachers need to be like doctors, we want to tell you what is happening and give you some reasons why evidence-based medicine itself is said to be in crisis.

1. Randomised controlled trials are just one kind of evidence

Medicine now recognises a much broader evidence base than just randomised controlled trials. Other kinds of medical evidence include: practical “on-the-job” expertise; professional knowledge; insights provided by other research such as case studies; intuition; wisdom gained from listening to patient histories and discussions with patients that allow for shared decision-making or negotiation.

Privileging randomised controlled trials allows them to become sticks that beat practitioners into uniformity of practice, no matter what their patients want or need. Such practitioners become “cookbook” doctors or, in education, potentially, “cookbook” teachers. The best and most recent forms of evidence-based medicine value a broad range of evidence and do not create hierarchies of evidence. Education policy needs to consider this carefully and treat all forms of evidence equally.

2. Medicine can be used as a bully

Teaching is a feminised profession, with a much lower status than medicine. It is easy for science to exert a masculinist authority over teachers, who are required to be ever more scientific to seem professional. They are called on to be phallic teachers, using data, tools, tests, rubrics, standards, benchmarks, probes and scientific trials, rather than the “soft” skills of listening, empathising, reflecting and sharing.

A Western scientific evidence-base for practice similarly does not value Indigenous knowledges or philosophies of learning. Externally mandated guidelines also negate the concepts of student voice and negotiated curriculum. While confident doctors know that randomised controlled trial-based statistics and effect sizes need to be read with scepticism, this is not so easy for many teachers. If randomised controlled trial-based guidelines are to rule teaching, teachers will also be monitored for compliance with guidelines they may not fully understand or accept, and which may harm their students.

3. Evidence-based medicine is about populations, not people

While medical randomised controlled trials save lives by demonstrating the broad effects of interventions, they make individuals and their needs harder to perceive and respect.  Randomised controlled trial-based guidelines can mean that diverse people are forced to conform to simplistic ideals. Rather than starting with the patient, the doctor starts with the rule. Is this what we want for teaching? When medical guidelines are applied in rigid ways, patients can be harmed.

Trials cannot be done on every single kind of person and so inevitably, many individuals are forced to have treatments that will not benefit them at all, or that are at odds with their wishes and beliefs. Educators need to ensure that teachers, not bureaucrats or researchers, remain the authority in their classrooms.

4. Scientific evidence gives rise to gurus

Evidence-based practice can give rise to the cult of the guru. Researchers such as John Hattie, and trademarked programs like “Visible Learning” that rest on apparently infallible science, can rapidly colonise and dominate education. Yet their medicalised glamour disguises the reality that there is no universal and enduring formula for “what works”.

In 2009, evidence-based guidelines advised that healthy people should take aspirin to prevent heart attacks. Yet also in 2009, new medical evidence “proved” that, for healthy people, the harms of taking aspirin outweigh the benefits.

In 2009, in his book Visible Learning: A synthesis of over 800 meta-analyses relating to achievement, Hattie said class size does not matter. In 2014, further research found that reducing class size has an important and lasting impact, especially for students from disadvantaged backgrounds.

While medical-style guidelines may seem to have come from God, such guidelines, even in medicine, are often multiple and contradictory. The “cookbook” teacher will always be chasing the latest guideline, disempowered by top-down interference in the classroom.

In medicine, over five years, fifty percent of guideline recommendations are overturned by new evidence. A comparable situation in education would create unimaginable turmoil for teachers.

5. Evidence-based practice risks conflicts of interest

Educational publishers and platforms are very interested in “scientific” evidence.  If a researcher can “prove” an intervention works and should be applied to all, this means big dollars. Randomised controlled trials in medicine routinely produce outcomes that are to the benefit of industry. Only certain trials get funded. Much unfavourable research is never published. Drug and medical companies set agendas rather than responding to patient needs, in what has been described as a guideline “factory”.

Imagine how this will play out in education. Do we want what happens in classrooms to be dictated by profit-driven companies, or decided by student-centred teachers?

What needs to happen?

We call for an urgent halt to the imposition of ‘evidence-based’ education on Australian teachers until there is a fuller understanding of the benefits and costs of narrow, statistical evidence-based practice. In particular, education needs protection from the likely exploitation of evidence-based guidelines by industries with vested interests.

Rather than removing teacher agency and enforcing subordination to gurus and data-based cults, education needs to embrace a wide range of evidence and reinstate the teacher as the expert who decides whether or not a guideline applies to each student.

Pretending teachers are doctors, without acknowledging the risks and costs of this, leaves students consigned to boring, standardised and ineffective cookbook teaching. Do we want teachers to start with a recipe, or the person in front of them?

Here is our paper for those who want more: A broken paradigm? What education needs to learn from evidence-based medicine by Lucinda McKnight and Andy Morgan

Dr Lucinda McKnight is a pre-service teacher educator and senior lecturer in pedagogy and curriculum at Deakin University, Melbourne. She is also a qualified health and fitness professional. She is interested in the use of scientific and medical metaphor in education. Lucinda can be found on Twitter @LucindaMcKnigh8

Dr Andy Morgan is a British Australian medical doctor and senior lecturer in general practice at Monash University, Melbourne. He has an MA in Clinical Education from the Institute of Education, UCL, London. His research interests are in consultation skills and patient-centred care. He is a former fellow of the Royal College of General Practitioners, and a current fellow of the Royal Australian College of General Practitioners.



What’s good ‘evidence-based’ practice for classrooms? We asked the teachers, here’s what they said

Calls for Australian schools and teachers to engage in ‘evidence-based practice’ have become increasingly loud over the past decade. Like ‘quality’, it’s hard to argue against evidence or the use of evidence in education, but also like ‘quality’, the devil’s in the detail: much depends on what we mean by ‘evidence’, what counts as ‘evidence’, and who gets to say what constitutes good ‘evidence’ of practice.

In this post we want to tell you about the conversations around what ‘evidence’ means when people talk about evidence-based practice in Australian schools and, importantly, about our research into what teachers think good evidence is.

Often when people talk about ‘evidence’ in education they are talking about two different types of evidence. The first is the evidence of teacher professional judgement, collected and used at classroom level and involving things like student feedback and teacher self-assessment. The second is ‘objective’ or clinical evidence collected by tools like system-wide standardised tests.

Evidence of teacher professional judgement

This type of evidence is represented in the Australian Teacher Performance and Development Framework. For example, the framework suggests that good evidence of teachers’ practice is rich and complex, requiring that teachers possess and use sharp and well-honed professional judgement. It says: “an important part of effective professional practice is collecting evidence that provides the basis for ongoing feedback, reflection and further development. The complex work of teaching generates a rich and varied range of evidence that can inform meaningful evaluations of practice for both formative and summative purposes” (p.6). It goes on to suggest that sources of this kind of evidence might include observation, student feedback, parent feedback and teacher self-assessment and reflection, among others.

‘Objective’ evidence

The second discussion around evidence promotes good evidence of practice as something that should be ‘objective’ or clinical, something that should be independent of the ‘subjectivity’ of teacher judgement. We see this reflected in, for example, the much-lauded “formative assessment tool” announced in the wake of Gonski 2.0 and to be developed by KPMG. The tool will track every child and ‘sound alarms’ if a child is slipping behind. It aims to remedy the purportedly unreliable nature of assessments of student learning that have not been externally validated, by standardising formative assessment practices. Indeed, the Gonski 2.0 report is strongly imbued with the idea that evidence of learning that relies on teacher professional judgement needs to be overridden by more objective measures.

But what do teachers themselves think good evidence is?

We’ve been talking to teachers about their understanding and use of evidence, as part of our Teachers, Educational Data and Evidence-informed Practice project. We began with 21 interviews with teachers and school leaders in mid-2018, and have recently run an online questionnaire that gained over 500 responses from primary and secondary teachers around Australia.

Our research shows that teachers clearly think deeply about what constitutes good evidence of their practice. For many of them, the fact that students are engaged in their learning provides the best evidence of good teaching. Teachers were very expansive and articulate about what the indicators of such engagement are:

I know I’m teaching well based on how well my students synthesise their knowledge and readily apply it in different contexts. Also by the quality of their questions they ask me and each other in class. They come prepared to debate. Also when they help each other and are not afraid to take risks. When they send me essays and ideas they might be thinking about. Essentially I know I’m teaching well because the relationship is positive and students can articulate what they’re doing, why they’re doing it and can also show they understand, by teaching their peers. (Secondary teacher, NSW)

Furthermore, teachers know that ‘assessment’ is not something that stands independent of them – that the very act of using evidence to inform practice involves judgement. Their role in knowing their students, knowing about learning, and assessing and supporting their students to increase their knowledge and understanding is crucial. Balanced and thoughtful assessment of student learning relies on knowledge of how to assess, and of what constitutes good evidence.

Good evidence is gathering a range of pieces of student work to use to arrive at a balanced assessment. I believe I am teaching well when the student data shows learning and good outcomes. (Primary teacher, SA)

Gathering good evidence of teaching and learning is an iterative process; that is, a process of evaluating and adjusting that teachers constantly repeat and build on. It is part of the very fabric of teaching, and something that good teachers do every day in order to make decisions about what needs to happen next.

I use strategies like exit cards sometimes to find out about content knowledge and also to hear questions from students about what they still need to know/understand. I use questioning strategies in class and make judgements based on the answers or further questions of my students. (Secondary teacher, Vic)

I get immediate feedback each class from my students.  I know them well and can see when they are engaged and learning and when I’m having very little effect. (Secondary teacher, Qld)

Where does NAPLAN sit as ‘evidence’ for teachers?

Teachers are not afraid to reflect on and gather evidence of their practice, but too often, calls for ‘evidence-based practice’ in education ignore the evidence that really counts. Narrow definitions of evidence where it is linked to external testing are highly problematic. While external testing is part of the puzzle, it can be harmful to use that evidence for purposes beyond what it can really tell us – as one of us has argued before. And the teachers in our study well understood this. For them, NAPLAN data, for instance, was bottom of the list when it comes to evidence of their practice, as seen in the chart below.

This doesn’t mean they discount the potential, if partial, informative value of such testing (after all, about 72% think it’s at least a ‘somewhat’ valid and reliable form of evidence), but it does mean that, in their view, the best evidence is that which is tied to the day-to-day work that goes on in their classrooms.

[Chart: evidence sources rated from ‘not useful’ to ‘extremely useful’ by teachers in our survey]

Teachers value a range of sources of evidence of their practice, placing particular emphasis on that which has a front row seat to their work, their own reflections and observations, and those of the students they teach. Perhaps this is because they need this constant stream of information to enable them to make the thousands of decisions they make about their practice in the course of a day – or an hour, or a minute. The ‘complex work of teaching’ does not need a formalised, ‘objective’ tool to help it along. Instead, we need to properly recognise the complexity of teaching, and the inherent, interwoven necessity of teacher judgement that makes it what it is.

What do teachers want?

Teachers were very clear about what they didn’t want.

Teachers are time poor. We are tired. It sounds good to do all this extra stuff but unless we are given more time it will just be another layer of pressure. (Secondary teacher, NSW)

Teachers believe in and want to rely on useful data but they don’t have the time to do it well. (Primary teacher, NSW)

It must be practical, helpful and not EXTRA. (Primary teacher, Vic)

They don’t want “extra stuff” to do.

They want relevant, high quality and localised professional learning. They want to better understand and work with a range of forms of useful data and research. They particularly find in-school teacher research with support useful, along with access to curated readings with classroom value. Social media also features as a useful tool for teachers.

Our research is ongoing. Our next task is to work further with teachers to develop and refine resources to support them in these endeavours.

We believe teachers should be heard more clearly in the conversations about evidence; policy makers and other decision-makers need to listen to teachers. The type of evidence that teachers want and can use should be basic to any plan around ‘evidence-based’ or ‘evidence-informed’ teaching in Australian schools.

Dr Nicole Mockler is Associate Professor of Education at the Sydney School of Education and Social Work at the University of Sydney. She is a former teacher and school leader, and her research and writing primarily focuses on education policy and politics and teacher professional identity and learning. Her recent scholarly books include Questioning the language of improvement and reform in education: Reclaiming meaning (Routledge, 2018) and Engaging with student voice in research, education and community: Beyond legitimation and guardianship (Springer, 2015), both co-authored with Susan Groundwater-Smith. Nicole is currently Editor in Chief of The Australian Educational Researcher. Nicole is on Twitter @nicolemockler

Dr Meghan Stacey is a lecturer in the sociology of education and education policy in the School of Education at the University of New South Wales. Taking a particular interest in teachers, her research considers how teachers’ work is framed by policy, as well as the effects of such policy for those who work with, within and against it. Meghan completed her PhD with the University of Sydney in 2018. Meghan is on Twitter @meghanrstacey