Technical democracy, fairness and the UK exam algorithm: Making a ‘design Thing’ to explore bias in automated grading systems

Year: 2021

Author: Gulson, Kalervo, Swist, Teresa, Knight, Simon, Kitto, Kirsty

Type of paper: Symposium

Abstract:
In 2020, during the height of the COVID-19 pandemic, students in the UK could not sit their final matriculation exams, which are necessary for entry to university. As an alternative, instead of relying on teacher assessment of students, the UK exam regulator used a simple sorting algorithm to predict students' A-Level grades. The A-Level algorithm can be considered a simple form of Artificial Intelligence (AI), specifically an automated decision-making tool. While it was designed to ensure fairness through standardisation, the tool ultimately penalised students who performed better than expected given their school's context and their school's performance in previous years. The algorithmically produced grades were withdrawn after widespread student protests. The UK exam algorithm controversy catapulted the topic of automated grading systems into the mainstream media spotlight and sparked heated public debate amongst diverse publics/communities: students, parents, teachers' unions, statisticians, exam boards, and education bodies. Suddenly, a range of intricate concepts (such as algorithms, fairness, bias, standardisation, prediction) became objects of collective attention and concern. While widespread media coverage and in-depth reports from different sectors offered detailed insights and analysis, the potential of educational/pedagogical tools for collective experimentation and learning about such socio-technical controversies remains underexplored. In this presentation we examine the participatory design process of developing an interface/data visualisation of the UK exam algorithm with a team of interdisciplinary researchers - including social, learning and computer scientists - and students.
We propose that this ‘design Thing’ (Björgvinsson et al., 2012) opens up possibilities for technical democracy: to navigate the multidimensional bias and opacity of automated grading systems, and to negotiate fairer, more inclusive alternatives with diverse publics/communities.
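The bias the abstract describes can be illustrated in miniature. The sketch below is not Ofqual's actual model; it is a hedged, hypothetical rank-based standardisation in the same spirit: each student's predicted grade is read off their school's historical grade distribution according to their teacher-assigned rank, so a student who outperforms their school's history is capped at its best past grade.

```python
def predict_grades(ranked_students, historical_grades):
    """Illustrative rank-based standardisation (hypothetical, not Ofqual's code).

    ranked_students: student names, best first, as ranked by teachers.
    historical_grades: past grades awarded at this school, best first.
    Each student is mapped to the grade at the matching position of the
    school's historical distribution.
    """
    n = len(ranked_students)
    predictions = {}
    for i, student in enumerate(ranked_students):
        # Scale the student's rank position onto the historical distribution.
        j = min(i * len(historical_grades) // n, len(historical_grades) - 1)
        predictions[student] = historical_grades[j]
    return predictions

# A school where no grade above "A" has historically been awarded:
history = ["A", "B", "B", "C", "C", "C", "D", "E"]
cohort = ["Asha", "Ben", "Chloe", "Dev"]  # Asha ranked top by her teachers
print(predict_grades(cohort, history))
# → {'Asha': 'A', 'Ben': 'B', 'Chloe': 'C', 'Dev': 'D'}
# Asha can receive at most an "A", however strong her individual work is.
```

The point of the toy example is that the model never consults the individual student's work at all: the school's past ceiling becomes every current student's ceiling, which is exactly the pattern that penalised high-performing students at historically lower-performing schools.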
