You know there is something going wrong with Australia’s national testing program when the education minister of the largest state calls for it to be axed. The testing program, which started today across the nation, should be urgently dumped according to NSW Education Minister, Rob Stokes, because it is being “used dishonestly as a school rating system” and has “sprouted an industry that extorts money from desperate families”.
I think it should be dumped too, in its current form, but for an even more compelling reason than Stokes has aired. I believe we are not being honest with parents about how misleading the results can be.
The Federal Minister for Education, Simon Birmingham, has rejected the NSW minister’s call, of course, arguing that “parents like NAPLAN”. Birmingham is probably right. Many parents see NAPLAN scores as one of the few clear indications they get of how their child’s performance at school compares to other children.
Parents receive a NAPLAN student report showing their child’s score as dots on a band, clearly positioned in relation to their school average and national average. It all looks very precise.
But how precise are the NAPLAN results? Should parents, or any of us, be so trusting of the results?
There is considerable fallout from the reporting of NAPLAN results, so I believe it is important to talk about what is going on. The comparisons we make, the decisions we make, and the assumptions we make about those NAPLAN results can all be based on very imprecise data.
How NAPLAN results can be wildly inaccurate
While communication of results to parents suggests a very high level of precision, the technical report issued by ACARA each year suggests something quite different. Professor Margaret Wu, a world leading expert in educational measurement and statistics, has done excellent and sustained work over a number of years on what national testing data can (and can’t) tell us.
Her work shows that because a relatively small number of questions in each section of the NAPLAN tests is used to estimate a child’s performance across each (very large) assessment area, there is a lot of what statisticians call ‘measurement error’ involved. This means that while parents are provided with an indication of their child’s performance that looks very precise, the real story is quite different.
Here’s an example of how wrong the results can be
Figure A is based on performance on the 2016 Year 7 Grammar and Punctuation test: in this case, the student has achieved a score of 615, placing them in the middle of Band 8. On the basis of this, we might conclude that they are performing above their school average of about 590 and well above the national average of 540. Furthermore, the student sits right at the upper cut-off of the shaded area representing the middle 60% of students, which means their performance appears to be just inside the top 20% of students nationally.
However, Figure B tells a different story. Here we have the same result, with the ‘error bars’ added (using the figures provided in the 2016 NAPLAN Technical Report, and a 90% Confidence Interval, consistent with the MySchool website). The solid error bars on Figure B indicate that while the student has received a score of 615 on this particular test, we can only be 90% confident that their true ability in grammar and punctuation lies somewhere between 558 and 672 – about two bands’ worth. If we were to use a 95% confidence interval, which is the standard in educational statistics, the span would be even wider, from 547 to 683 – this is shown by the dotted error bars.
In other words, the student’s ‘true ability’ might be very close to the national average, toward the bottom of Band 7, or quite close to the top of Band 9.
That’s a very wide ‘window’ indeed.
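As a rough check, the error bars described above can be reproduced from a standard error of measurement. The sketch below is illustrative only: the standard error is back-calculated from the intervals quoted in this post, rather than taken directly from the NAPLAN Technical Report.

```python
def confidence_interval(score, standard_error, z):
    """Return the (low, high) range for a test score given its
    standard error of measurement and a z-value for the desired
    confidence level (1.645 for 90%, 1.96 for 95%)."""
    margin = z * standard_error
    return (round(score - margin), round(score + margin))

# Standard error back-calculated from the figures in this post:
# the 90% interval 558-672 spans 2 * 1.645 * SE around 615,
# so SE is roughly (672 - 558) / (2 * 1.645), about 34.7 points.
SE = (672 - 558) / (2 * 1.645)

print(confidence_interval(615, SE, 1.645))  # 90% CI: (558, 672)
print(confidence_interval(615, SE, 1.96))   # 95% CI: (547, 683)
```

A spread of more than 100 points either way around a single dot is exactly why the apparent precision of the parent report is misleading.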
Wu goes on to note that NAPLAN is also not very good at representing student ability at the class or school level because of what statisticians call ‘sampling error’ – the error caused by variation in the mean scores of students due to the characteristics of different cohorts. Sampling error is affected by the number of students in a year group: the smaller the cohort, the larger the sampling error. Wu points out that the margin of error on school means can easily be close to, or indeed larger than, one year of expected annual growth. So for schools with cohorts of 50 or fewer students, a very significant change in mean performance from one year to the next could arise purely from sampling error.
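To see why small cohorts produce large sampling error, here is a back-of-envelope sketch: treating a school’s mean as a sample mean, its margin of error shrinks only with the square root of cohort size. The student-level standard deviation of 70 points used below is an assumed round figure for illustration, not an official NAPLAN value.

```python
import math

def school_mean_margin(student_sd, cohort_size, z=1.645):
    """One side of a 90% confidence interval for a school's mean
    score, treating the cohort as a sample of the given size."""
    return z * student_sd / math.sqrt(cohort_size)

# Illustrative only: a student-level standard deviation of 70
# NAPLAN points is an assumed round number, not an official figure.
for n in (25, 50, 200):
    print(n, round(school_mean_margin(70, n), 1))
```

On these assumed numbers, a cohort of 50 carries a margin of error of around 16 points either way, while a cohort of 200 roughly halves that – which is why year-to-year swings at small schools can be statistical noise rather than real change.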
The problem is that school NAPLAN results are published on the MySchool website. Major decisions are made based on them and on the information parents get in their child’s individual report; parents can spend a lot of money (tutoring, changing school, choosing a school) based on them. As Minister Stokes said, a big industry has developed around selling NAPLAN textbooks, programs and tutoring services. But the results we get are not precise. The precision argument just doesn’t hold. Don’t fall for it.
Any teacher worth their salt, especially one who hadn’t felt the pressure to engage in weeks of NAPLAN preparation with their students, would be far more precise in assessing their students’ ability than any dot on a band. Teachers continually assess their students and continually collect substantial evidence as to how their students are performing.
Research also suggests that publishing the NAPLAN results on the MySchool website has played a driving role in Australian teachers and students experiencing NAPLAN as ‘high stakes’.
So is NAPLAN good for anything?
At the national level, however, the story is different. What NAPLAN is good for, and indeed what it was originally designed for, is providing a national snapshot of student ability, and enabling comparisons between different groups (for example, students with a language background other than English and students from English-speaking backgrounds) on a national level.
This is important data to have. It tells us where support and resources are particularly needed. But we could collect the data we need by using a rigorous sampling method, where a smaller number of children (a sample) is tested, rather than having every student in every school sit the tests every two years. This move would be a lot more cost effective, both financially and in terms of other costs to our education system.
So, does NAPLAN need to be urgently dumped?
Our current use of NAPLAN data definitely does need to be urgently dumped. We need to start using NAPLAN results for, and only for, the purpose for which they are fit. I believe we need to get the individual school results off the MySchool website for starters. That would quite quickly cut out much of the hype and anxiety. I think it is time, at the very least, to be honest with parents about what NAPLAN does and doesn’t tell them about their children’s learning and about their schools.
In the process we might free up some of that precious classroom time for more productive things than test preparation.
*With thanks to A/Prof James Ladwig for his helpful comments on the draft of this post.
Dr Nicole Mockler is an Associate Professor in Education at the University of Sydney. She has a background in secondary school teaching and teacher professional learning. In the past she has held senior leadership roles in secondary schools, and after completing her PhD in Education at the University of Sydney in 2008, she joined the University of Newcastle in 2009, where she was a Senior Lecturer in the School of Education until early 2015. Nicole’s research interests are in education policy and politics, professional learning and curriculum and pedagogy, and she also continues to work with teachers and schools in these areas.
Nicole is currently the Editor in Chief of The Australian Educational Researcher, and a member of the International Advisory Board of the British Educational Research Journal and Educational Action Research. She was the Communications Co-ordinator for the Australian Association for Research in Education from 2011 until 2016, and until December 2016 was Associate Editor for both Critical Studies in Education and Teaching and Teacher Education.
(Note to readers from the Ed. We are having tech problems with our share counters at the moment across some search engines and devices. For those interested – as of 5:25pm 18/5/18 this post had been shared over 1000 times (1K+) on FB and 102 on Twitter.)
6 thoughts on “It’s time to be honest with parents about NAPLAN: your child’s report is misleading, here’s how”
Today on many sites there have been discussions such as this one about the benefits or otherwise of NAPLAN. This, however, is the first one that acknowledges that one of the disadvantages of NAPLAN is the immense loss of engaged learning time in important areas of the curriculum. This loss is due to the pressure from principals for children to practise test procedures, learn how to do the test on computers and complete other NAPLAN-related activities. Friends who are teachers report losing five weeks of teaching time to NAPLAN practice, all in the name of literacy and numeracy at the lowest of levels. Is this truly a worthwhile use of children’s time in the twenty-first century, where the basic literacies of critical, creative, and ethical thinking are just as important?
Thank you for identifying the immense loss of time as a key factor in the NAPLAN debate. That is the hidden curriculum of 2018.
Thanks for your comment, Rose-Marie.
Any teacher worth their salt could have forecast how NAPLAN results were going to be hijacked by political parties the moment the decision was made to publish these results on MySchool.
Nicole has rightly pointed out that the classroom teacher who interacts with their students has the best understanding and knowledge of any individual student’s capability.
NAPLAN is only a snapshot in time of a student’s educational journey, not the make-or-break moment some pundits wish to portray it as.
A true educator is a person who elicits trust and belief in a student, and facilitates their growth as a learner so that they will strive to continually better themselves.
One-off testing does not provide the full picture of a student’s growth, nor does it highlight failings in the educational system. The reason, as stated in the article, is that people are now gaming the system and the data is not being used for its intended purpose.
The most qualified person to tell you about the ability of any student is their teacher(s), not some one-off test.
Thanks for your comment, Paul.
Thanks for identifying the flaws in the statistical underpinnings of how NAPLAN is used and interpreted.
Has anyone examined the assumptive underpinnings of what NAPLAN actually measures? What are the operational definitions of “effective reading” and “effective writing” which the designers of NAPLAN are (allegedly) measuring?
I think the negative effects on our children’s sense of self-confidence are under-considered by these decision makers. When my daughter took the NAPLAN in year 7, she was devastated by her results in mathematics in particular. In an attempt to quantify her results, I was able to show her the data on the back of the report. She had gotten a total of 11 questions wrong out of about 50, I think. Of those questions, only 4 had been answered correctly by more than 50% of the students!
I then drilled a little deeper to find that only two of the questions related to the same area of knowledge, and that, therefore, she could treat that one area as something that might need improvement.
What about those parents who don’t think to look at the back of the report? What about those parents who are placing undue value on these tests? What is happening to the self-esteem of those children?
The bar graphs are not a spectacular way to present the results. Maybe, if the test is to be continued, the data around the number of questions answered incorrectly and the percentage of students who successfully completed each question – you know, the graph on the BACK of the pictorial representation – could be the one which is highlighted. It is much more useful as far as I am concerned.