Year: 2015
Author: Christie, Michael; Simon, Susan; Grainger, Peter
Type of paper: Abstract refereed
Abstract:
In this paper we report on one part of a peer review project that aimed to improve the quality of teaching and learning in Master of Education courses. The overall project explored the benefits of a transferable, online, blended learning model of peer review for Masters courses at two regional universities in Australia and the US. The research methodology used was participative action research (Bradbury & Reason, 2008). Our research methods included surveys, interviews and critical incidents. The project was funded for 18 months, beginning in 2014 and ending in mid 2015. At the end of the first semester, results from the pre- and post-surveys indicated that the second cycle of research should be divided into three sub-projects. One looked at the use of online video in peer review, another focused on written tasks, and a third concerned itself with the professional development of lecturers and their use of effective criterion-referenced assessment sheets, or rubrics, in peer review. The third sub-project is the focus of this paper. Peer feedback sheets that incorporated key criteria from the rubric for a particular task were developed. The tasks included both oral and written assessments. Sometimes lecturers collected the feedback sheets and used them to assist in their grading (with student permission), while on other occasions the completed feedback sheets were handed straight back to the students. Students appreciated the feedback they were given by both peers and lecturers, and said that discussing the feedback sheets, and the rubrics they were based on, in class had helped improve their learning. They also noted that using feedback sheets in peer review discussions enhanced academic discourse among themselves, especially between domestic and international students. According to the lecturers, leading these discussions helped their teaching because it focussed the students’ attention on what was required in order to successfully complete the set task. In interviews, and in some of the critical incidents, however, it became clear that both lecturers and students thought that the rubrics on which the feedback sheets were based were of uneven quality. Some of the standard descriptors in the rubrics did not define exactly what a student needed to do in order to achieve a particular grade but instead merely varied the intensity of the same qualifier (“writes well” versus “writes very well”, for example). As a result of our findings, a set of questions that lecturers could ask themselves before writing rubrics was developed using the Delphi technique (Green, Armstrong & Graefe, 2007). These questions, agreed upon by a group of experts, served both as a means of evaluating existing rubrics and of developing new ones.