PISA is a program of large-scale assessments conducted for the Organisation for Economic Co-operation and Development (OECD). The program was launched in 1997, and the first cycle of PISA was conducted in 2000. According to the OECD, PISA was developed to generate internationally comparable evidence on student performance. The target population for PISA was all 15-year-old students enrolled in schools. However, 15-year-old students may be spread across several year levels, and that spread can differ across jurisdictions and change over time. This paper focuses on these effects within Australia.

In PISA 2015, 75 per cent of 15-year-olds in Australia were in Year 10, with 11 per cent in Year 9 and 14 per cent in Year 11. In Queensland the corresponding percentages were 51 per cent, 47 per cent and two per cent. In Tasmania, by contrast, 32 per cent of 15-year-olds were in Year 9, 68 per cent were in Year 10 and almost none were in Year 11.

There have been shifts in the distribution of 15-year-olds across year levels in some jurisdictions between PISA 2000 and PISA 2015. The largest shift has been for Western Australia: in 2000, approximately half (49%) of the 15-year-olds in PISA were in Year 11, compared with approximately one-eighth (13%) in 2015. This shift is probably attributable to the introduction of a foundation year, phased in as a half-year cohort over 2001 and 2002, with concomitant changes in the average school starting age that affected the age–grade distribution in Years 10 and 11 in PISA 2012. PISA estimates the score-point difference associated with one year of schooling at, on average, 30 score points on the PISA scale. Hence, a shift in the age–grade distribution over time could contribute to changes in the average reading scores for 15-year-olds.
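The implied magnitude of such distributional effects can be illustrated with a back-of-envelope calculation: weight each year level by the share of 15-year-olds in it, take the difference in mean year level between two jurisdictions, and multiply by the 30-points-per-year figure. The sketch below applies this to the Queensland and Tasmania PISA 2015 distributions quoted above; it is an illustration of the mechanism, not an estimate from the paper's own analysis.

```python
# Back-of-envelope illustration: translate differences in the year-level
# distribution of 15-year-olds into PISA score points, using PISA's
# average figure of ~30 score points per year of schooling and the
# 2015 distributions quoted in the text.

POINTS_PER_YEAR = 30  # PISA's average estimate for one year of schooling

# Proportion of 15-year-olds in each year level, PISA 2015 (from the text)
queensland = {9: 0.47, 10: 0.51, 11: 0.02}
tasmania = {9: 0.32, 10: 0.68, 11: 0.00}

def mean_year_level(dist):
    """Average year level implied by a year-level distribution."""
    return sum(year * share for year, share in dist.items())

gap_in_years = mean_year_level(queensland) - mean_year_level(tasmania)
gap_in_points = gap_in_years * POINTS_PER_YEAR
print(f"Implied gap: {gap_in_years:.2f} years of schooling "
      f"= {gap_in_points:.1f} PISA score points")
```

On these figures the two jurisdictions differ by only about a tenth of a year of schooling on average, but the same arithmetic applied to the Western Australian shift between 2000 and 2015 would yield a considerably larger movement, which is why a year-level adjustment matters for comparisons over time.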
The shift in the distribution of students across year levels could have had a substantial effect for Western Australia and Tasmania, with smaller effects for South Australia and Victoria. In this paper we use regression methods to generate "year-level adjusted" scores, which we argue provide a more valid basis for comparisons among jurisdictions and across time, and for analyses of associations with other student characteristics.
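The general idea of a regression-based year-level adjustment can be sketched as follows. This is a minimal illustration, not the paper's exact specification: it assumes an ordinary least squares regression of the score on year-level indicators, with Year 10 as an arbitrary reference level, and uses simulated data in place of PISA records.

```python
import numpy as np

# Minimal sketch of "year-level adjusted" scores (illustrative only):
# regress the score on year-level indicator variables, then subtract each
# student's estimated year-level effect so all scores are expressed on a
# common (Year 10) footing.

rng = np.random.default_rng(0)
n = 1000

# Simulated data: year level of each 15-year-old, and a score that embeds
# roughly 30 points per year of schooling plus noise (hypothetical values)
year_level = rng.choice([9, 10, 11], size=n, p=[0.2, 0.6, 0.2])
score = 500 + 30 * (year_level - 10) + rng.normal(0, 90, size=n)

# Design matrix: intercept plus one indicator per non-reference year level
X = np.column_stack([
    np.ones(n),
    (year_level == 9).astype(float),
    (year_level == 11).astype(float),
])
beta, *_ = np.linalg.lstsq(X, score, rcond=None)

# Adjusted score: remove the estimated year-level effect, leaving each
# student's score relative to a Year 10 baseline
year_effect = beta[1] * (year_level == 9) + beta[2] * (year_level == 11)
adjusted = score - year_effect
```

After the adjustment, the mean adjusted score is the same in every year-level group, so differences between jurisdictions or between PISA cycles are no longer driven by where their 15-year-olds happen to sit in the age–grade distribution.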