Abstract:

The National Assessment Program – Literacy and Numeracy (NAPLAN) is an annual literacy and numeracy test for all Australian students, and results from the test are disaggregated into a number of categories, including Language Background Other Than English (LBOTE). For this and other categories, results on each section of the test are aggregated into state, territory and national means and standard deviations. The NAPLAN data indicate that since the test began in 2008, there has been little difference between the results of LBOTE and non-LBOTE students across all domains of the test. In 2010, the Year 9 LBOTE group was stronger in numeracy and spelling, similar in writing, grammar and punctuation, and weaker in reading. These results defy a logic which might suggest that the LBOTE category could take account of the role of English as a second language in test results; instead, they suggest that a second-language background can enhance performance. In this paper, I interrogate the variation within the LBOTE category, using data provided by the Queensland state education department and focusing on Year 9 students who participated in the 2010 test. Using multiple regression on variables specifically related to language background, I show that there is wide variation in performance within the LBOTE category, and that the LBOTE data are in fact hiding some of our most disadvantaged students. I suggest alternative ways in which language learners could be identified, to better inform policy and pedagogical responses to student needs.