
What’s good ‘evidence-based’ practice for classrooms? We asked the teachers, here’s what they said

Calls for Australian schools and teachers to engage in ‘evidence-based practice’ have become increasingly loud over the past decade. Like ‘quality’, it’s hard to argue against evidence or the use of evidence in education, but also like ‘quality’, the devil’s in the detail: much depends on what we mean by ‘evidence’, what counts as ‘evidence’, and who gets to say what constitutes good ‘evidence’ of practice.

In this post we want to tell you about the conversations around what ‘evidence’ means when people talk about evidence-based practice in Australian schools, and importantly we want to tell you about our research into what teachers think good evidence is.

Often when people talk about ‘evidence’ in education they are talking about two different types of evidence. The first is the evidence of teacher professional judgement, collected and used at classroom level and involving things like student feedback and teacher self-assessment. The second is ‘objective’ or clinical evidence, collected by tools like system-wide standardised tests.

Evidence of teacher professional judgement

This type of evidence is represented in the Australian Teacher Performance and Development Framework. For example, the framework suggests that good evidence of teachers’ practice is rich and complex, requiring that teachers possess and use sharp and well-honed professional judgement. It says: “an important part of effective professional practice is collecting evidence that provides the basis for ongoing feedback, reflection and further development. The complex work of teaching generates a rich and varied range of evidence that can inform meaningful evaluations of practice for both formative and summative purposes” (p.6). It goes on to suggest that sources of this kind of evidence might include observation, student feedback, parent feedback and teacher self-assessment and reflection, among others.

‘Objective’ evidence

The second discussion around evidence promotes good evidence of practice as something that should be ‘objective’ or clinical, independent of the ‘subjectivity’ of teacher judgement. We see this reflected in, for example, the much-lauded “formative assessment tool” announced in the wake of Gonski 2.0 and to be developed by KPMG. The tool will track every child and ‘sound alarms’ if a child is slipping behind. By standardising formative assessment practices, it aims to remedy the purportedly unreliable nature of assessments of student learning that haven’t been externally validated. Indeed, the Gonski 2.0 report is strongly imbued with the idea that evidence of learning that relies on teacher professional judgement needs to be overridden by more ‘objective’ measures.

But what do teachers themselves think good evidence is?

We’ve been talking to teachers about their understanding and use of evidence, as part of our Teachers, Educational Data and Evidence-informed Practice project. We began with 21 interviews with teachers and school leaders in mid-2018, and have recently run an online questionnaire that gained over 500 responses from primary and secondary teachers around Australia.

Our research shows that teachers clearly think deeply about what constitutes good evidence of their practice. For many of them, the fact that students are engaged in their learning provides the best evidence of good teaching. Teachers were very expansive and articulate about what the indicators of such engagement are:

I know I’m teaching well based on how well my students synthesise their knowledge and readily apply it in different contexts. Also by the quality of their questions they ask me and each other in class. They come prepared to debate. Also when they help each other and are not afraid to take risks. When they send me essays and ideas they might be thinking about. Essentially I know I’m teaching well because the relationship is positive and students can articulate what they’re doing, why they’re doing it and can also show they understand, by teaching their peers. (Secondary teacher, NSW)

Furthermore, teachers know that ‘assessment’ is not something that stands independent of them – that the very act of using evidence to inform practice involves judgement. Their role in knowing their students, knowing about learning, and assessing and supporting their students to increase their knowledge and understanding is crucial. Balanced and thoughtful assessment of student learning relies on knowledge of how to assess, and of what constitutes good evidence.

Good evidence is gathering a range of pieces of student work to use to arrive at a balanced assessment. I believe I am teaching well when the student data shows learning and good outcomes. (Primary teacher, SA)

Gathering good evidence of teaching and learning is an iterative process: a process of evaluating and adjusting that teachers constantly repeat and build on. It is part of the very fabric of teaching, and something that good teachers do every day in order to make decisions about what needs to happen next.

I use strategies like exit cards sometimes to find out about content knowledge and also to hear questions from students about what they still need to know/understand. I use questioning strategies in class and make judgements based on the answers or further questions of my students. (Secondary teacher, Vic)

I get immediate feedback each class from my students.  I know them well and can see when they are engaged and learning and when I’m having very little effect. (Secondary teacher, Qld)

Where does NAPLAN sit as ‘evidence’ for teachers?

Teachers are not afraid to reflect on and gather evidence of their practice, but too often, calls for ‘evidence-based practice’ in education ignore the evidence that really counts. Narrow definitions of evidence that tie it to external testing are highly problematic. While external testing is part of the puzzle, it can be harmful to use that evidence for purposes beyond what it can really tell us – as one of us has argued before. And the teachers in our study understood this well. For them, NAPLAN data, for instance, sat at the bottom of the list when it came to evidence of their practice, as seen in the chart below.

This doesn’t mean they discount the potential, if partial, informative value of such testing (after all, about 72% think it’s at least a ‘somewhat’ valid and reliable form of evidence), but it does mean that, in their view, the best evidence is that which is tied to the day-to-day work that goes on in their classrooms.

Evidence rated from not useful to extremely useful by teachers in our survey

Teachers value a range of sources of evidence of their practice, placing particular emphasis on that which has a front-row seat to their work: their own reflections and observations, and those of the students they teach. Perhaps this is because they need this constant stream of information to enable them to make the thousands of decisions they make about their practice in the course of a day – or an hour, or a minute. The ‘complex work of teaching’ does not need a formalised, ‘objective’ tool to help it along. Instead, we need to properly recognise the complexity of teaching, and the inherent, interwoven necessity of teacher judgement that makes it what it is.

What do teachers want?

Teachers were very clear about what they didn’t want.

Teachers are time poor. We are tired. It sounds good to do all this extra stuff but unless we are given more time it will just be another layer of pressure. (Secondary teacher, NSW)

Teachers believe in and want to rely on useful data but they don’t have the time to do it well. (Primary teacher, NSW)

It must be practical, helpful and not EXTRA. (Primary teacher, Vic)

They don’t want “extra stuff” to do.

They want relevant, high quality and localised professional learning. They want to better understand and work with a range of forms of useful data and research. They particularly find in-school teacher research with support useful, along with access to curated readings with classroom value. Social media also features as a useful tool for teachers.

Our research is ongoing. Our next task is to work further with teachers to develop and refine resources to support them in these endeavours.

We believe teachers should be heard more clearly in the conversations about evidence; policy makers and other decision-makers need to listen to teachers. The type of evidence that teachers want and can use should be basic to any plan around ‘evidence-based’ or ‘evidence-informed’ teaching in Australian schools.

Dr Nicole Mockler is Associate Professor of Education at the Sydney School of Education and Social Work at the University of Sydney. She is a former teacher and school leader, and her research and writing primarily focus on education policy and politics, and teacher professional identity and learning. Her recent scholarly books include Questioning the Language of Improvement and Reform in Education: Reclaiming Meaning (Routledge, 2018) and Engaging with Student Voice in Research, Education and Community: Beyond Legitimation and Guardianship (Springer, 2015), both co-authored with Susan Groundwater-Smith. Nicole is currently Editor in Chief of The Australian Educational Researcher. Nicole is on Twitter @nicolemockler.

Dr Meghan Stacey is a lecturer in the sociology of education and education policy in the School of Education at the University of New South Wales. Taking a particular interest in teachers, her research considers how teachers’ work is framed by policy, as well as the effects of such policy on those who work with, within and against it. Meghan completed her PhD at the University of Sydney in 2018. Meghan is on Twitter @meghanrstacey.

National Evidence Base for educational policy: a good idea or half-baked plan?

The recent call for a ‘national education evidence base’ by the Australian Government came as no surprise to Australian educators. The idea is that we need to gather evidence, nationally, on which education policies, programs and teaching practices work, so that governments can spend money wisely on education. There have long been arguments that Australia has been increasing its spending on education, particularly school education, without improving outcomes. We need to ‘get more bang for our buck’, as Education Minister Simon Birmingham famously told us, or, as the Australian Productivity Commission put it, we need to ‘improve education outcomes in a cost-effective manner’.

I am one of the many educators who submitted a response to the Australian Productivity Commission’s national education evidence base proposal as set out in the draft report ‘National Education Evidence Base’. This blog post is based on my submission. Submissions are now closed and the Commission’s final report is due to be forwarded to the Australian Government in December 2016.

Inherent in the argument for a national education evidence base are criticisms of current educational research in Australia. As an educational researcher working in Australia, I focus here on those criticisms.

Here I will address five points raised in the report: 1) the extent to which there is a need for better research to inform policy, 2) the nature of the needed research, 3) the capacity needed to produce that research, 4) who the audience of that research should be, and 5) the governance structure needed to produce research that is in the public interest.

The need for better research to inform policy

As the report notes, there are several ongoing educational debates that could well be better advanced if a stronger evidence base existed. Examples of such public debates are easily identified in Australia, most notably the perpetual literacy wars. In a rational world, the report seems to suggest, such debates could become a thing of the past if only we had strong enough research to settle them. To me, this is a laudable goal.

However, such a position is naive in its assessment of why these debates are in fact ongoing, and more naive still in proposing recommendations that barely address any but the most simplistic reasons for the current situation. For example, whatever the current state of literacy research, the report itself demonstrates that the major source of these debates is not actually the research that government-directed policy agents decide to use and interpret, but the simple fact that there is NO systemic development of research-informed policy analysis in Australia that is independent of government itself.

The introductory justification for this report, based loosely on a weak analysis of a small slice of the available international comparative data, demonstrates clearly how government-directed research works in Australia.

As an editor of a top-ranking educational research journal (the American Educational Research Journal) I can confidently say this particular analysis would not meet the standards of our highest-ranked research journals: it is partial, far from comprehensive and lacking in its own internal logic. It is a very good example of the sort of research use the report claims to want to move away from.

The nature of the needed research

The report makes much of the need for research that tests causal claims (claims of the form “A was a cause of B”), placing high priority on experimental and quasi-experimental designs. This portion of the report simply sums up arguments for the type of research promoted as ‘gold standard’ in education more than a decade ago in the USA and UK. The argument is in part common sense. However, it is naïve to presume that such research will provide what policy makers in Australia today need to develop policy.

Comparisons are made between research in education and research in medicine for a variety of sensible reasons. However, the implications of that comparison go largely unrecognised in the report.

If Australia wishes to develop a more secure national evidence base for educational policy akin to that found in medicine, it must confront basic realities which most often are ignored and which are inadequately understood in this report:

a) the funding level of educational research is a minuscule fraction of that available to medicine,

b) the range and types of research that inform medical policy extend far beyond anything seen as ‘gold standard’ for education, including epidemiological studies, program evaluations and qualitative studies relevant to most medical practices, and

c) the degree to which educational practices are transportable across national and cultural differences is far less than that confronted by doctors whose basic unit of analysis is the human body.

Just at a technical level, while the need for randomised trials is identified in the report, there are clearly naïve assumptions about how they can actually be conducted with statistical validity: accounting for school-level error estimation requires large samples of schools, and individual-level randomisation is insufficient. Thus, the investment needed for truly solid evidence-based policy research in education is dramatically under-estimated in the report and in most public discussions.

The capacity needed to produce that research

The report does well to identify a substantial shortage of Australian expertise available for this sort of research, and in the process demonstrates two dynamics that deserve much more public discussion and debate. First, there has been a trend towards relying on disciplines outside education for the technical expertise to analyse currently available data. While this can be quite helpful at times, it is often fraught with problems: invalid interpretations, simplistic (and practically unhelpful) policy recommendations that fail to take the history of the field and its systems into account, and over-promising about the future effects of following the policy advice given.

Second, the report dramatically fails to acknowledge that the current shortage of research capacity is directly related to the manner and form of higher education funding available to do the work needed to develop future researchers. There is the additional obvious issue of a lack of secure career development in Australia for educational researchers. This, of course, is directly related to the previous point.

Audience of evidence-based policy research

While the report is clearly directed at developing solid evidence for policy-makers, it understates the need for that research also to be reported adequately to the broader public that is part of the policy-making process. By necessity this involves developing a much larger dissemination infrastructure than currently exists.

At the moment it would be very difficult for any journalist, much less any member of the general public, to find sound independent reports of larger bodies of (necessarily complicated and sometimes conflicting) research written for the purpose of informing the public. Almost all of the most independent research is either not translated out of its scholarly home journals or not readily available due to restrictions in government contracts. What is available publicly, and sometimes claims to be independent, is almost always conducted with clear and obviously partial political and/or self-interest.

The reason this situation exists is simply that there is no independent body of educational research apart from the projects individual researchers conduct with independent funding from the ARC (and that funding is barely sufficient to its current disciplinary task).

Governance structure needed to produce research that is in the public interest

Finally, I think perhaps the most important point to make about this report is that it claims to want to develop a national evidence base for informing policy, yet the proposed governance of that evidence and research sits entirely under the same government strictures that currently limit what is done and said in the name of educational policy research in Australia. That is, however much there is a need to increase the research capacities of the various government departments and agencies that currently advise government, all of them are beholden to restrictive contracts, or staffed by individuals who are obliged not to open current policy to public criticism.

By definition this means that public debate cannot be informed by independent research under the proposed governance for the development of the proposed national evidence base.

This is a growing trend in education that warrants substantial public debate. With the development of a single curriculum body and a single institute for professional standards, all with similarly restricted governance structures (just as was recently proposed in the NSW review of its Board of Studies), the degree to which alternative educational ideas, programs and institutions can be openly developed and tested is becoming more and more restricted.

Given the report’s desire to develop experimental testing, it is crucial to keep in mind that such research is predicated on the development of sound alternative educational practices which require the support of substantial and truly independent research.


James Ladwig is Associate Professor in the School of Education at the University of Newcastle and co-editor of the American Educational Research Journal. He is internationally recognised for his expertise in educational research and school reform.