Abstract:
In the high-stakes milieu of Year 12 assessments that determine, in large part, students' subsequent access to tertiary education and/or the workplace, the rhetoric of 'school effectiveness' has been reconstructed in the marketplace language of 'winners' and 'losers'. Unfortunately, the growing use of Year 12 'school effectiveness' indicators in the form of students' mean examination scores, aggregated at the school level, tends to be narrowly focused on the annual production of 'league table'-type RANKINGS of schools' results rather than on locating and estimating the magnitudes of the major sources of variation that EXPLAIN differences within and between schools. This paper addresses some of the prevailing myths about 'school effectiveness' by reporting key findings from multivariate, multilevel analyses of data obtained over 5 years (1994-1998) for 53 studies (subjects) of the Victorian Certificate of Education (VCE), from 2.2 million students located in 600 government, Catholic, independent and adult VCE providers. After adjusting for the effects of student gender, 'general ability' and sector, the results indicate significantly greater variation across studies within schools than between schools. It is concluded that providers of secondary schooling are 'effective' only to the extent that they establish 'effective' teaching and learning programs that maximise students' achievements across ALL areas of the curriculum. The findings are discussed in terms of their implications for both policy and practice.
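To make the variance-partitioning claim concrete, the following is a minimal sketch of the kind of multilevel (variance-components) model the abstract describes, assuming student scores nested within studies within schools; the covariate coding, notation and distributional assumptions here are illustrative, not the paper's reported specification.

$$
y_{ijk} = \beta_0 + \beta_1\,\mathrm{gender}_{ijk} + \beta_2\,\mathrm{ability}_{ijk} + \beta_3\,\mathrm{sector}_{k} + v_k + u_{jk} + e_{ijk},
$$
$$
v_k \sim N(0, \sigma_v^2), \qquad u_{jk} \sim N(0, \sigma_u^2), \qquad e_{ijk} \sim N(0, \sigma_e^2),
$$

where $y_{ijk}$ is the examination score of student $i$ in study $j$ at school $k$, $v_k$ is the school-level random effect, $u_{jk}$ the study-within-school random effect, and $e_{ijk}$ the student-level residual. On this illustrative reading, 'significantly greater variation across studies within schools than between schools' corresponds to the adjusted estimate of $\sigma_u^2$ exceeding that of $\sigma_v^2$.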