
Q&A: ‘what works’ in education, with Bob Lingard, Jessica Gerrard, Adrian Piccoli, Rob Randall and Glenn Savage (chair)

See the full video here

Evidence, expertise and influence are increasingly contested in the making of Australian schooling policy.

More than ever, policy makers, researchers and practitioners are being asked to defend the evidence they use, justify why the voices of some experts are given preference over others, and be critically aware of the networks of influence that determine what counts as evidence and expertise.

The release of the ‘Gonski 2.0’ report raises a number of complex questions about the use of evidence in the development of schooling policies, and the forms of expertise and influence that are increasingly dominant in shaping conversations about the trajectory of schooling reform.

The report signals an ever-increasing presence of federal government influence in shaping schooling policy in Australia’s federal system. It also strongly reflects global shifts towards a “what works” reform narrative, which frames policy decisions as only justifiable in cases where there is evidence of demonstrable impact.

Proposals such as the creation of a ‘national research and evidence institute’ by the Labor party, and related proposals by the Australian Productivity Commission to create a national ‘education evidence base’, signal a potentially new era of policy making in Australia, in which decisions are guided by new national data infrastructures and hierarchies of evidence.

These developments raise serious questions about which kinds of evidence will count (and can be counted) in emerging evidence repositories, which experts (and forms of expertise) will be able to gain most traction, how developments might change the roles of federal, state and national agencies in contributing to evidence production, and the kinds of research knowledge that will (or will not) be able to gain traction in national debates.

On November 6th, I hosted a Q&A Forum at the University of Sydney, co-sponsored by the AARE ‘Politics and Policy in Education’ Special Interest Group and the School and Teacher Education Policy Research Network at the University of Sydney.

It featured Adrian Piccoli (Director of the UNSW Gonski Institute for Education), Jessica Gerrard (senior lecturer in education, equity and politics at the University of Melbourne), Bob Lingard (Emeritus Professor at the University of Queensland and Professorial Research Fellow at the Australian Catholic University) and Rob Randall (CEO of the Australian Curriculum, Assessment and Reporting Authority).

What follows is an edited version of the event, featuring some key questions I posed to the panelists and some of their highlight responses.


Glenn: I want to start by considering the changing role and meaning of ‘evidence’ and how different forms of evidence shape conditions of possibility for education. What do you see as either the limits or possibilities of “what works” and “evidence-based” approaches to schooling reform?

Bob: It seems to me the ‘what works’ idea works with a sort of engineering conception of the relationship between evidence, research, policy making and professional practice in schools, and I think it also over simplifies research and evidence … I would prefer a relationship between evidence (and evidences of multiple kinds) to policy and to practice which was more of an enlightenment relationship rather than an engineering one … I think policy making and professional practice are really complex practices, and I think we can only ever have evidence-informed policy and evidence-informed professional practice, I don’t think we can have evidence-based … I think ‘what works’ has an almost inert clinical construction of practice. And I think there’s an arrogant certainty.

Adrian: The problem with the ‘what works’ movement is that it lends itself, particularly at a political level, to there being a ‘silver bullet’ to education improvement and the thing you launch the silver bullet on is a press release. I’ve always said the press release is the greatest threat to good education policy because it sounds good, in the lead up to an election, to say things like ‘independent public schools work’ so fund them, or it might be a phonics check, so let’s fund this because it works, but I think it lends itself to that kind of one-dimensional approach to education policy. But education reform is an art. What makes the painting great? It’s not the blue or the yellow or the red, it’s actually the right combination of those things. Education, at a political level, people can try to boil it down to things that are too simple.

Rob: I actually think the term [what works] is a useful term. If I go back to when I first started teaching, it’s a good question, ‘what works?’ Can you give me some leads? It’s not a matter of saying ‘this is it entirely’, but we’ve got to be careful of how the language enables us and not continue to diss it.

Glenn: NSW has created its Centre for Education Statistics and Evaluation, which describes itself as Australia’s first ‘data hub’ in education that will tell us “what works” in schools and ensure decisions are evidence-informed. On the Centre’s website, it tells us that NSW works with the concept of ‘an evidence hierarchy’. At the top of the hierarchy is ‘the gold standard’, which includes either ‘meta analyses’ or ‘randomised controlled trials’. To me this raises a question: how might the role of researchers be shifting now that ‘the best’ evidence is primarily based on large-scale and quantitative methods?

Jess: To me it’s a funny situation to be in when your bread and butter work is producing knowledge and evidence but you find yourself arguing against the framing and enthusiastic uptake of something like ‘evidence-based policy’. Particularly concerning is this hierarchical organisation of evidences where randomised controlled trials, statistical knowledge and other things like meta analyses are thought to be more certain, more robust, more concrete than other forms of research knowledge, such as qualitative in-depth interviews with school teachers about their experiences. The kind of knowledge that is produced through a statistical or very particular causal project becomes very narrow because it has to bracket out so many other contextual factors in order to produce ‘a certainty’ about social phenomena. We can’t rely on a medical model, where RCTs come from, for something like classroom practice, and you can see this in John Hattie’s very influential book Visible Learning. You just have to look at the Preface, where he says that he bracketed out of his study any factor that was out of school. When you think about that, it becomes unsurprising that the biggest finding is that teachers have the most impact, because you’ve bracketed out all these other things that clearly have an impact … With the relationship between politics and policy, I think it’s really interesting that, politically speaking, evidence-based policy becomes very popular around some reforms, yet not around others. School autonomy is a great example: there’s no evidence to say that has a positive impact on student achievement, yet it gets rolled out. There’s no RCT on that, there’s no RCT on the funding of elite private schools, but yet we do these things. I think we can get into a trap of ‘policy-led evidence’ when political interests try to wrestle evidence for their own purposes.

Glenn: Let’s consider which ‘experts’ tend to exert the most influence in schooling. For example, a common claim is that some groups and individuals might get more of a say than others in steering debates about schooling. In other words, not everyone ‘gets a seat at the table’ when decisions are made – and if they do, voices are not always equally heard. A frequent criticism, for example, is that certain think tanks or lobby groups, or certain powerful and well-connected individuals, are often able to exert disproportionate power and influence. Would any of you like to comment on those dynamics and the claim that it might not be an even playing field of influence?

Bob: I think ‘think tank research’ is very different from the kind of research that’s done by academics in universities. The think tank usually has a political-ideological position, it usually takes the policy problem as given rather than thinking about the construction, I think it does research and writes reports which have specific audiences in mind, one the media and two the politicians. I remember once when I did a report for a government and the minister told me my problem was that I was ‘two-handed’. I’d say ‘on the one hand this might be the case, and on the other hand…’, but what he wanted was one-handed research advice, and I think in some ways the think tanks, that’s what they do.    

Glenn: Another important dimension here is that even when one’s voice is heard, often what ‘the public’ hears is far from the full story. And I think this is where we need to consider the role of the media and the 24-hour news cycle we now inhabit. For example, so much of what we hear about ‘the evidence’ driving schooling reform is filtered through the media; but this is invariably a selective version of the evidence. Do any of you have any thoughts or reflections on this complex dynamic between the media, experts, evidence and policy?

Adrian: Good education policy is really boring, right? It’s boring for the Daily Telegraph, it’s boring for the Sydney Morning Herald, it’s boring for the ABC, Channel 7, it’s boring. You talk curriculum, you talk assessment, you talk pedagogy, I mean when was the last time you saw the ‘pedagogy’ word in a news article? … what’s exciting is ‘you know what, here’s the silver bullet’ … and the public and media and the political process doesn’t have the patience for sound evidence-based education reform.

Rob: I think we’re at risk of underestimating the capability of the profession in terms of interpreting and engaging with this. I think we’re at risk of under-estimating the broader community.

Glenn: To me, it seems there’s something peculiar in terms of how expertise about education is constructed. For example, in the medical profession, many would see the expertise as lying with the practitioners themselves, the doctors, surgeons, and so on, who “possess” the expertise and are, therefore, the experts. If education mirrored this, then surely the experts would be the teachers and school leaders – and expertise would lie in their hands? But this often seems to be far from the way expertise is talked about in schooling. Instead, it seems the experts are often the economists, statisticians and global policy entrepreneurs who have little to do with schools. Why is it that the profession itself seems to so often be obscured in debates about expertise and schooling reform?

Jess: What we see now is because education and schooling is such a politically invested enterprise, with huge money attached to it, it’s never really been wrestled from the hands of government in terms of a professional body. So, a body like AITSL, for instance, which is meant to stand in as a kind of professional body, isn’t really representative of the profession, it doesn’t have those kinds of links to teachers themselves as the medical equivalent does. So, we’re in a curious state of affairs, I think you’re right Glenn, where who counts as having expertise are often not those who are within the street level, within the profession … We don’t have enough of an opportunity to hear from teachers themselves, to have unions and teachers as part of the public discussion, and when they are a part of the discussion they’re often positioned as being argumentative or troublesome as opposed to contributing to a robust public debate about education.

Bob: As we’ve moved into the kind of economies we have, the emphasis on schooling as human capital and so on, it is those away from schooling, the economists and others, who I think have formulated the big macro policy, rather than the knowledge of the profession.

Glenn: Up to this point we’ve been mainly talking about influence in terms of specific individuals, or groups, but also I think certain policies and forms of data also exert significant influence. I need only mention the term NAPLAN in front of a group of educators to inspire a flood of conversations (and often polarised opinion) about how this particular policy and its associated data influence their work. Is it a stretch to say that these policy technologies and data infrastructures now serve as political actors in their own right? Is there a risk when we start seeing data itself as a “source of truth” beyond the politics of its creation?

Jess: I think it’s absolutely seen in that way and I think that’s the problem with the hierarchy of knowledge or evidence. There’s a presumption that these so-called higher or more stable forms of knowledge can stand above the messiness of everyday life in schools or the complexity of social and cultural phenomena … there’s no way a number can convey the complexity, but because they seem so tantalisingly certain, they then take on a life of their own.

Adrian: NAPLAN is the King Kong of education policy because it started off relatively harmless on this little island and now it’s ripping down buildings and swatting away airplanes. I mean it’s just become this dominant thing in public discourse around education.    

Rob: Let’s not get naïve about how people are using it [NAPLAN]. People use the data in a whole range of ways. It’s not that it’s good on one side and bad on the other … now if we want to, we could take the data away, or we could actually say, ‘let’s have a more complete discussion about it’ … give parents the respect they deserve, I do not accept that there’s a whole bunch of parents out there choosing schools on the basis of NAPLAN results.

Glenn: To finish tonight, I want to pose a final ‘big sky’ question. The question is: If you had the power to change one thing about how the politics of evidence, expertise or influence work in Australian schooling policy, what would that be?

Bob: I would want to give emphasis to valuing teacher professional judgment within the use of data and have that as a central element rather than having the data driving.

Adrian: I would make it a legal requirement that systems and governments have to put the interests of children ahead of the interests of adults in education policy.

Jess: I think I’m going to give a sociologist’s answer, which is to say that I think what I would want to see is greater political commitment to acknowledging the actual power that is held in the current production of data and the strategic use of that. The discussion also needs to address the ethical and political dimensions of education and schooling beyond what data can tell us.

Rob: I would like to pursue the argument about increasing the respect and nature, the acknowledgment of, and the expectation of, the profession … I think there is a whole bunch of teachers out there who do a fantastic job … given their fundamental importance to the community, to the wellbeing of this country going forward I’d be upping the ante for the respect for and expectation of teachers.


Glenn C. Savage is a senior lecturer in education policy and sociology of education at the University of Western Australia. His research focuses on education policy, politics and governance at national and global levels, with a specific interest in federalism and national schooling reform. He currently holds an Australian Research Council ‘Discovery Early Career Research Award’ (DECRA) for his project titled ‘National schooling reform and the reshaping of Australian federalism’ (2016-2019).

National Evidence Base for educational policy: a good idea or half-baked plan?

The recent call for a ‘national education evidence base’ by the Australian Government came as no surprise to Australian educators. The idea is that we need to gather evidence, nationally, on which education policies, programs and teaching practices work in order for governments to spend money wisely on education. There have long been arguments that Australia has been increasing its spending on education, particularly school education, without improving outcomes. We need to ‘get more bang for our buck’ as Education Minister, Simon Birmingham, famously told us or as the Australian Productivity Commission put it, we need to ‘improve education outcomes in a cost‑effective manner’.

I am one of the many educators who submitted a response to the Australian Productivity Commission’s national education evidence base proposal as set out in the draft report ‘National Education Evidence Base’. This blog post is based on my submission. Submissions are now closed and the Commission’s final report is due to be forwarded to the Australian Government in December 2016.

Inherent in the argument for a national education evidence base are criticisms of current educational research in Australia. As an educational researcher working in Australia this is the focus of my interest.

Here I will address five points raised in the report: 1) the extent to which there is a need for better research to inform policy, 2) the nature of the needed research, 3) the capacity needed to produce that research, 4) who the audience of that research should be, and 5) the governance structure needed to produce research that is in the public interest.

The need for better research to inform policy

As the report notes, there are several aspects of ongoing educational debate which could well be better advanced if a stronger evidence base existed. Examples of ongoing public educational debates are easily identified in Australia, most notably the perpetual literacy wars. In a rational world, so the report seems to suggest, such debates could well become a thing of the past if only we had strong enough research to settle them. To me, this is a laudable goal.

However, such a standard position is naive in its assessment of why these debates are in fact ongoing, and more naive still in proposing recommendations that barely address any but the most simplistic reasons for the current situation. For example, whatever the current state of literacy research, the report itself demonstrates that the major source of these debates is not actually the research that government-directed policy agents decide to use and interpret, but the simple fact that in Australia there is NO systemic development of research-informed policy analysis which is independent from government itself.

The introductory justification for this report, based loosely on a weak analysis of a small slice of available international comparative data, demonstrates clearly how government-directed research works in Australia.

As an editor of a top-ranking educational research journal (the American Educational Research Journal) I can confidently say this particular analysis would not meet the standards of our highest-ranked research journals, because it is apparently partial, far from comprehensive and lacking in its own internal logic. It is a very good example of the very sort of research use the report claims to want to move away from.

The nature of the needed research

The report makes much of the need for research which tests causal claims (claims of the form “A was a cause of B”), placing high priority on experimental and quasi-experimental designs. This portion of the report simply sums up arguments for the type of educational research promoted as ‘gold standard’ more than a decade ago in the USA and UK. The argument is in part common sense. However, it is naïve to presume that such research will provide what policy makers in Australia today need to develop policy.

Comparisons are made between research in education and research in medicine for a variety of sensible reasons. However, the implications of that comparison go largely unrecognised in the report.

If Australia wishes to develop a more secure national evidence base for educational policy akin to that found in medicine, it must confront basic realities which most often are ignored and which are inadequately understood in this report:

a) the funding level of educational research is a minuscule fraction of that available to medicine,

b) the range and types of research that inform medical policy extend far beyond anything seen as ‘gold standard’ for education, including epidemiological studies, program evaluations and qualitative studies relevant to most medical practices, and

c) the degree to which educational practices are transportable across national and cultural differences is far less than that confronted by doctors whose basic unit of analysis is the human body.

At a technical level, while the need for randomised trials is identified in the report, there are clearly naïve assumptions about how such trials can actually be conducted with statistical validity. Valid designs must account for school-level error estimation, which in turn requires large samples of schools. (Individual-level randomisation is insufficient.) Thus, the investment needed for truly solid evidence-based policy research in education is dramatically under-estimated in the report and in most public discussions.
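The point about school-level error can be made concrete with the standard ‘design effect’ adjustment used when planning cluster-randomised trials. The sketch below is illustrative only: the numbers (intraclass correlation, students per school, baseline sample size) are hypothetical, not drawn from the report, but they show how even modest within-school correlation of outcomes multiplies the required sample and forces recruitment of many schools.

```python
import math

# Illustrative sketch: why individual-level randomisation is insufficient.
# All numeric inputs below are hypothetical assumptions, chosen only to
# demonstrate the scale of the adjustment.

def design_effect(cluster_size: int, icc: float) -> float:
    """Variance inflation from clustering: 1 + (m - 1) * ICC."""
    return 1 + (cluster_size - 1) * icc

def clustered_sample_size(base_n: int, cluster_size: int, icc: float) -> int:
    """Total students needed once school-level clustering is accounted for."""
    return math.ceil(base_n * design_effect(cluster_size, icc))

base_n = 800   # students a naive individual-level power calculation suggests
m = 25         # students sampled per school (assumed)
icc = 0.25     # within-school correlation of outcomes (assumed)

n_students = clustered_sample_size(base_n, m, icc)
n_schools = math.ceil(n_students / m)
print(n_students, n_schools)  # 5600 students across 224 schools
```

Under these assumed inputs, a study that looks feasible with 800 individually randomised students in fact requires thousands of students spread across hundreds of schools, which is the scale of investment the report under-estimates.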

The capacity needed to produce that research

The report does well to identify a substantial shortage of Australian expertise available for this sort of research, and in the process demonstrates two dynamics which deserve much more public discussion and debate. First, there has been a trend towards relying on disciplines outside of education for the technical expertise of analysing currently available data. While this can be quite helpful at times, it is often fraught with problems: invalid interpretations, simplistic (and practically unhelpful) policy recommendations which fail to take the history of the field and its systems into account, and over-promising the future effects of following the policy advice given.

Second, the report dramatically fails to acknowledge that the current shortage of research capacity is directly related to the manner and form of higher education funding available to do the work needed to develop future researchers. There is the additional obvious issue of a lack of secure career development in Australia for educational researchers. This, of course, is directly related to the previous point.

Audience of evidence-based policy research

While the report is clearly directed to developing solid evidence for policy-makers, it understates the need for that research to also provide sufficient reporting to a broader public for the policy making process. By necessity this involves the development of a much larger dissemination infrastructure than currently exists.

At the moment it would be very difficult for any journalist, much less any member of the general public, to find sound independent reports of larger bodies of (necessarily complicated and sometimes conflicting) research written for the purposes of informing the public. Almost all of the most independent research is either not translated from its scholarly home journals or not readily available due to restrictions in government contracts. What is available publicly and sometimes claims to be independent is almost always conducted with clear and obviously partial political and/or self-interest.

The reason this situation exists is simply that there is no independent body of educational research apart from that conducted by individual researchers in the research projects conducted with the independent funding of the ARC (and that is barely sufficient to its current disciplinary task).

Governance structure needed to produce research that is in the public interest

Finally, I think perhaps the most important point to make about this report is that it claims to want to develop a national evidence base for informing policy, yet the proposed governance of that evidence and research sits entirely under the same government strictures that currently limit what is done and said in the name of educational policy research in Australia. That is, however much there is a need to increase the research capacities of the various government departments and agencies which currently advise government, all of these are beholden to restrictive contracts, or their work is conducted by individuals who are obliged not to open current policy to public criticism.

By definition this means that public debate cannot be informed by independent research under the proposed governance for the development of the proposed national evidence base.

This is a growing trend in education that warrants substantial public debate. With the development of a single curriculum body and a single institute for professional standards, all with similarly restricted governance structures (just as was recently proposed in the NSW review of its Board of Studies), the degree to which alternative educational ideas, programs and institutions can be openly developed and tested is becoming more and more restricted.

Given the report’s desire to develop experimental testing, it is crucial to keep in mind that such research is predicated on the development of sound alternative educational practices which require the support of substantial and truly independent research.


James Ladwig is Associate Professor in the School of Education at the University of Newcastle and co-editor of the American Educational Research Journal.  He is internationally recognised for his expertise in educational research and school reform.