Synthetic Governance: Causal Claims, Machine Learning And Experiments In Pursuit Of Policy Certainty

Year: 2021

Author: Gulson, Kalervo, Sellar, Sam

Type of paper: Symposium

This paper will provide an account of an Australian education system’s attempt to use machine learning to disrupt enduring inequities in educational outcomes. The education system has used external consultants – a not-for-profit Artificial Intelligence (AI) institute – to bring new expertise into the system. These consultants have been employed to identify which causal factors matter and could be changed to make a difference to inequities in educational outcomes. The AI institute uses an approach from the field of causal inference, combined with the predictive capacities of machine learning (ML), to make causal claims that guide interventions. In this paper, we ask whether this policy experiment in identifying causality may end up ‘disrupt[ing] established material and institutional arrangements for producing and verifying truth’ (Crogan, 2019, p. 58). Can the use of causal inference and ML, as particular approaches to managing uncertainty in governing, be used to disrupt long-term inequities in the education system?

Our analysis involves reconceptualizing governance as performed conjunctively by machines and humans. Drawing on 10 interviews with stakeholders in this education system, the paper will outline how data scientists combine techniques from the fields of causal inference and ML that involve calculations interpretable by machines but not by humans. The assumptions underpinning the causal inferences are located within the education system, drawing on research and expertise from within the system to build models. The ML approaches, by contrast, are not specific to education and are applied widely across domains. We try to understand how this combination of machine/human approaches works in a policy setting through the notion of synthetic governance (Gulson, Sellar, & Webb, In press).
We define synthetic governance as an amalgamation of human and machine governance comprising: (i) human classifications, rationalities, values and calculative practices; (ii) new forms of computation, or what we might consider to be non-human political rationalities, that are changing how we think about thinking; and (iii) the new directions made possible for education governance by algorithms and AI. We explore whether this experiment in pursuit of ‘policy certainty’ by an education system, which is illustrative of synthetic governance, can create new policy truths – new kinds of machine/human policy learning – that can disrupt enduring forms of inequity.