James Ladwig

National Evidence Base for educational policy: a good idea or half-baked plan?

The recent call for a ‘national education evidence base’ by the Australian Government came as no surprise to Australian educators. The idea is that we need to gather evidence, nationally, on which education policies, programs and teaching practices work in order for governments to spend money wisely on education. There have long been arguments that Australia has been increasing its spending on education, particularly school education, without improving outcomes. We need to ‘get more bang for our buck’, as Education Minister Simon Birmingham famously told us, or, as the Australian Productivity Commission put it, we need to ‘improve education outcomes in a cost‑effective manner’.

I am one of the many educators who submitted a response to the Australian Productivity Commission’s national education evidence base proposal as set out in the draft report ‘National Education Evidence Base’. This blog post is based on my submission. Submissions are now closed and the Commission’s final report is due to be forwarded to the Australian Government in December 2016.

Inherent in the argument for a national education evidence base are criticisms of current educational research in Australia. As an educational researcher working in Australia, these criticisms are the focus of my interest here.

Here I will address five points raised in the report: 1) the extent to which there is a need for better research to inform policy, 2) the nature of the needed research, 3) the capacity needed to produce that research, 4) who the audience of that research should be, and 5) the governance structure needed to produce research that is in the public interest.

The need for better research to inform policy

As the report notes, there are several aspects of ongoing educational debate which could well be better advanced if a stronger evidence base existed. Examples of ongoing public educational debates are easily identified in Australia, most notably the perpetual literacy wars. In a rational world, so the report seems to suggest, such debates could well become a thing of the past if only we had strong enough research to settle them. To me, this is a laudable goal.

However, such a standard position is naive in its assessment of why these debates are in fact ongoing, and more naive still in proposing recommendations that barely address any but the most simplistic reasons for the current situation. For example, whatever the current state of literacy research, the report itself demonstrates that the major source of these debates is not actually the research that government-directed policy agents decide to use and interpret, but the simple fact that there is NO systemic development of research-informed policy analysis in Australia which is independent from government itself.

The introductory justification for this report, based loosely on a weak analysis of a small slice of available international comparative data, demonstrates clearly how government-directed research works in Australia.

As an editor of a top-ranking educational research journal (the American Educational Research Journal), I can confidently say this particular analysis would not meet the standards of our highest-ranked research journals because it is partial, far from comprehensive and lacking in its own internal logic. It is a good example of the very sort of research use the report claims to want to move away from.

The nature of the needed research

The report makes much of the need for research which tests causal claims (claims of the form “A was a cause of B”), placing high priority on experimental and quasi-experimental designs. This portion of the report simply sums up arguments about the need for the type of research in education promoted as ‘gold standard’ more than a decade ago in the USA and UK. This argument is in part common sense. However, it is naïve to presume that such research will provide what policy makers in Australia today need to develop policy.

Comparisons are made between research in education and research in medicine for a variety of sensible reasons. However, the implications of that comparison go largely unrecognised in the report.

If Australia wishes to develop a more secure national evidence base for educational policy akin to that found in medicine, it must confront basic realities which most often are ignored and which are inadequately understood in this report:

a) the funding level of educational research is a minuscule fraction of that available to medicine,

b) the range and types of research that inform medical policy extend far beyond anything seen as ‘gold standard’ for education, including epidemiological studies, program evaluations and qualitative studies relevant to most medical practices, and

c) the degree to which educational practices are transportable across national and cultural differences is far less than that confronted by doctors whose basic unit of analysis is the human body.

Just at a technical level, while the need for randomised trials is identified in the report, there are clearly naïve assumptions about how that can actually be done with statistical validity that accounts for school-level error estimation and the subsequent need for large samples of schools. (Individual-level randomisation is insufficient.) Thus, the investment needed for truly solid evidence-based policy research in education is dramatically under-estimated in the report and in most public discussions.
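To illustrate the point about school-level error, a sketch of my own (not an analysis from the report): the standard design-effect formula shows how the within-school correlation of outcomes erodes the statistical value of sampling more pupils from the same schools, which is why credible trials need large samples of schools rather than of individuals. The figures used below (50 pupils per school, an intraclass correlation of 0.2) are hypothetical.

```python
# Illustrative sketch: why school-level clustering inflates the sample a
# randomised trial in education needs. Uses the standard design-effect
# formula DEFF = 1 + (m - 1) * icc, where m is the number of pupils
# sampled per school and icc is the intraclass correlation of outcomes
# within schools. All numbers here are hypothetical.

def design_effect(pupils_per_school: int, icc: float) -> float:
    """Variance inflation caused by sampling pupils in school clusters."""
    return 1 + (pupils_per_school - 1) * icc

def effective_sample_size(n_pupils: int, pupils_per_school: int, icc: float) -> float:
    """How many independent observations the clustered sample is 'worth'."""
    return n_pupils / design_effect(pupils_per_school, icc)

# 5,000 pupils drawn 50 per school, with a modest ICC of 0.2:
deff = design_effect(50, 0.2)            # 1 + 49 * 0.2 = 10.8
ess = effective_sample_size(5000, 50, 0.2)  # roughly 463 pupils' worth
```

The point of the sketch is simply that under clustering, statistical power is governed far more by the number of schools than by the number of pupils, so randomising individual pupils within a handful of schools cannot deliver valid causal estimates.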

The capacity needed to produce that research

The report does well to identify a substantial shortage of Australian expertise available for this sort of research, and in the process demonstrates two dynamics which deserve much more public discussion and debate. First, there has been a trend toward relying on disciplines outside of education for the technical expertise of analysing currently available data. While this can be quite helpful at times, it is often fraught with problems: invalid interpretations, simplistic (and practically unhelpful) policy recommendations which fail to take the history of the field and its systems into account, and over-promising of the future effects of following the policy advice given.

Second, the report dramatically fails to acknowledge that the current shortage of research capacity is directly related to the manner and form of higher education funding available to do the work needed to develop future researchers. There is the additional obvious issue of a lack of secure career development in Australia for educational researchers. This, of course, is directly related to the previous point.

Audience of evidence-based policy research

While the report is clearly directed at developing solid evidence for policy-makers, it understates the need for that research also to be reported adequately to a broader public as part of the policy-making process. By necessity this involves the development of a much larger dissemination infrastructure than currently exists.

At the moment it would be very difficult for any journalist, much less any member of the general public, to find sound independent reports of larger bodies of (necessarily complicated and sometimes conflicting) research written for the purpose of informing the public. Almost all of the most independent research is either not translated from its scholarly home journals or not readily available due to restrictions in government contracts. What is available publicly, and sometimes claims to be independent, is almost always conducted with clear and obviously partial political and/or self-interest.

The reason this situation exists is simply that there is no independent body of educational research apart from that carried out by individual researchers in projects with the independent funding of the ARC (and that funding is barely sufficient to its current disciplinary task).

Governance structure needed to produce research that is in the public interest

Finally, I think perhaps the most important point to make about this report is that it claims to want to develop a national evidence base for informing policy, yet the proposed governance of that evidence and research sits entirely under the same government strictures that currently limit what is done and said in the name of educational policy research in Australia. That is, however much there is a need to increase the research capacities of the various government departments and agencies which currently advise government, all of their work is either beholden to restrictive contracts or conducted by individuals who are obliged not to open current policy to public criticism.

By definition this means that public debate cannot be informed by independent research under the proposed governance for the development of the proposed national evidence base.

This is a growing trend in education that warrants substantial public debate. With the development of a single curriculum body and a single institute for professional standards, all with similarly restricted governance structures (just as was recently proposed in the NSW review of its Board of Studies), the degree to which alternative educational ideas, programs and institutions can be openly developed and tested is becoming more and more restricted.

Given the report’s desire to develop experimental testing, it is crucial to keep in mind that such research is predicated on the development of sound alternative educational practices which require the support of substantial and truly independent research.




James Ladwig is Associate Professor in the School of Education at the University of Newcastle and co-editor of the American Educational Research Journal.  He is internationally recognised for his expertise in educational research and school reform.

Myth buster: improving school attendance does not improve student outcomes

Does improved student attendance lead to improved student achievement?

Join prime ministers, premiers and education ministers from all sides of politics if you believe it does. They regularly tell us about the need to “improve” or “increase” attendance in order to improve achievement.

We recently had unprecedented access to state government data on individual school and student attendance and achievement in over 120 schools (as part of a major 2009-2013 study of the reform and leadership of schools serving Indigenous students and communities), so we decided to test the widely held assumption.

What we found is both surprising and challenging.

The overall claim that increased attendance is linked with improved achievement seems like common sense. It stands to reason that if a student attends more, s/he is more likely to perform better on annually administered standardised tests. The inverse also seems intuitive and commonsensical: if an individual student doesn’t attend, s/he is less likely to achieve well on these conventional measures.

But sometimes what appears to make sense about an individual student may not factually hold up when we look at the patterns across a larger school or system.

In our research we were studying background patterns in attendance and achievement using very conventional empirical statistical analysis. What we found first up was that, whatever else we may hope, school-level attendance rates generally don’t change all that much.

Despite officially supported policies and high-profile school and regional foci, schools making big improvements in attendance rates are the rare exception.

Further, we found that the vast majority (around 76%) of the variation in school-level attendance is empirically related to geographic remoteness, the percentage of Indigenous kids, and levels of socio-economic marginalisation. These are matters that are, for the most part, beyond the purview of schools and systems to change. Most importantly and most surprisingly, we found there is no relationship between school attendance and school-level NAPLAN results. This is the case whether you look at overall levels and rates of change or at the achievement of specific groups of Indigenous and non-Indigenous students.

The particular policy story that improved attendance will improve or bootstrap conventional achievement has no basis in fact at a school level. The policy making and funding that goes into lifting attendance rates of whole schools or systems assumes erroneously that improvements in achievement by individual students will logically follow.

The bottom line is you can’t simply generalise an individual story and apply it to schools. The data shows this.

Further, and this is important in current reform debates, we observed that the very few schools with high percentages of Indigenous children that both increased attendance and achievement also had implemented significant curriculum and teaching method reforms over the same period examined.

In other words, attending school may or may not help generally, but improving achievement depends on what children do once we get them to school.

In our view, there is no short cut around the need for substantial ongoing reform of curriculum and teaching methods and affiliated professional development for teachers. Building quality teaching and learning relations is both the problem and the solution – not attendance or testing or accountability policies per se.



James Ladwig is an Associate Professor in the School of Education at the University of Newcastle and Adjunct Professor in the Victoria Institute of Victoria University.  He is internationally recognised for his expertise in educational research and school reform.

Allan Luke is Emeritus Professor in the ‎Faculty of Education at the Queensland University of Technology and Adjunct Professor in the Werklund School of Education, University of Calgary, Canada, where he works mentoring first nations academics. He is an educator, researcher, and theorist studying multiliteracies, linguistics, family literacy, and educational policy. Dr. Luke has written or edited over 14 books and more than 140 articles and book chapters.

Here is the full article: Does improving school level attendance lead to improved school level achievement? An empirical study of indigenous educational policy in Australia.