The appropriateness of professional judgement to determine performance rubrics in a graded competency based assessment framework

Year: 2003

Author: Bateman, Andrea; Griffin, Patrick

Type of paper: Abstract refereed

Abstract:
With the implementation of competency-based assessment within the Australian vocational education and training (VET) sector, the focus has been on valid and reliable assessments to ensure consistent outcomes across training providers. Underpinning this has been the notion of providing assessment judgements within a dichotomous reporting framework, that is, competent or not yet competent. This study investigated the appropriateness of professional expertise in developing performance rubrics for competencies defined by the Public Services Training Package. Levels of performance were identified along a continuum for interpretive purposes and competency decision making. Groups of judges estimated the relative difficulty of each of the rubrics, and item response theory was used to calibrate them. The judges’ estimates of difficulty and their interpretation of the developmental continuum were then compared with the outcomes of the item response analysis. The findings indicated that the specialists who developed the items were accurate in their judgements of relative difficulty. The internal consistency measure was high, indicating that the assessment instrument was a reliable measure of the construct. The criterion validity measure (person separation index) was also high, although there was room for improvement in the construct validity (item separation index) of the instrument. The study concluded that standard setting using subject matter experts proved adequate for developing performance rubrics.
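The abstract refers to person and item separation indices from a Rasch-type item response calibration, and to a comparison of judges' difficulty estimates against the calibrated values. The following is a minimal sketch in Python of how such quantities are commonly computed; all data values, standard errors, and variable names below are hypothetical illustrations, not figures from the study.

import numpy as np
from scipy.stats import spearmanr

def separation_reliability(estimates, std_errors):
    # Rasch separation reliability: (observed variance - mean error
    # variance) / observed variance, for person or item measures in logits.
    obs_var = np.var(np.asarray(estimates), ddof=1)
    err_var = np.mean(np.asarray(std_errors) ** 2)
    true_var = max(obs_var - err_var, 0.0)
    return true_var / obs_var

# Hypothetical data: judges' rank-ordering of rubric difficulty versus
# Rasch-calibrated difficulties (in logits) for six rubrics.
judge_order = np.array([1, 2, 3, 4, 5, 6])
rasch_difficulty = np.array([-1.2, -0.5, -0.1, 0.3, 0.8, 1.4])
rho, p = spearmanr(judge_order, rasch_difficulty)
print(f"judge vs IRT agreement: rho={rho:.2f} (p={p:.3f})")

# Hypothetical standard errors for the item calibrations.
item_se = np.array([0.30, 0.25, 0.24, 0.24, 0.26, 0.32])
rel = separation_reliability(rasch_difficulty, item_se)
print(f"item separation reliability: {rel:.2f}")

A high Spearman rho would indicate that the experts' ordering of rubric difficulty matches the empirical calibration, while the separation reliability indicates how well the measures are spread relative to their measurement error.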
