Approaches to establishing dependability in the judgement of teacher performance assessments

Year: 2018

Authors: Adie, Lenore; Haynes, Michelle

Type of paper: Abstract refereed

Dependability of judgements in higher education is a matter of ongoing concern. ‘Dependability’ is understood as the relationship between validity and reliability: “the highest optimum reliability that can be reached whilst preserving construct validity” (Harlen, 2005, p. 248). Moderation is the process that connects standards-based assessments to increased dependability and comparability of assessment results, providing a mechanism for quality control. We draw on the example of the moderation of teacher performance assessments (TPAs), using the case of the Graduate Teacher Performance Assessment© (GTPA) (ACU, 2017). One requirement of national program standards for accreditation of teacher education programs is the inclusion of “moderation processes that support consistent decision-making against the achievement criteria” in the assessment of TPAs (AITSL, 2015, p. 10).
In this paper, we present findings from the 2017 trial of the GTPA, which involved thirteen universities across six Australian states and territories. The paper addresses standards of evidence, judgement consistency, and sources of variance in judgement-making. It also explores the issue of generalisation. The moderation activities undertaken throughout the trial are presented. The focus is on what the trial taught us about consistency of judgements as supported by moderation, and on the utility of moderation processes for teacher educators in informing curriculum review and program planning. The paper describes the processes and practices involved in scoring GTPAs, applying the scoring rubric, using the purpose-designed decision-aids, and applying judgement methodologies. We draw on the statistical analysis to discuss demonstrated rater consistency and to illustrate some patterns of performance.