Cheating is a big problem. By my reading of the literature, around one in ten Australian university students has at some stage submitted an assignment they didn’t do themselves. Add to that other types of cheating such as using unauthorised material in exams, and emergent threats from artificial intelligence, and you have a fascinating, challenging and dangerous problem.
How can we address this problem? That’s a really hard question. When I talk about cheating I’ve learnt that I need to acknowledge a few big macro factors. So if you think cheating is caused by neoliberalism, under-funded education, or a fundamentally broken system of assessment then I’m not here to argue with you. But I don’t find those to be tractable problems for me, with the skillset I have.
I research what sorts of interventions we can do to address cheating within the everyday higher education situation we are in now. For my keynote at the Higher Education Research & Development Society of Australasia (HERDSA) conference this year I ranked different approaches to addressing cheating. I used the genre of a tier list to do so. Tier lists are used to show how good some ideas/interventions/albums/animals/foods are compared to others. Here’s my completed tier list for anti-cheating approaches:
The first thing to look at is the tiers: S, A, B, C, D, and F. The S-tier is where the most effective anti-cheating approaches live. Why ‘S’? That’s difficult to answer, and a fun rabbit hole to go down, but suffice it to say that S is where the most effective approaches are, and F is where the least effective approaches are.
What’s on the tiers and why?
Central teams: dealing with cheating is an expert practice – its own job title these days – so concentrate those experts together and resource them well so they can take the load off everyday academics
Amnesty/self-report: rather than treating every case of cheating through an adversarial pseudo-legal process, we should allow students to come forward and say “I’ve done something wrong and I’d like to make it right”
Programmatic assessment: zooming out from trying to stop cheating in every individual act of assessment, and instead thinking: how do we secure the degree as a whole from cheating?
Tasks students want to do: the rather obvious idea that students might cheat less in tasks they actually want to do
Vivas: having discussions with students about their work can be a great way to understand if they actually did it themselves
Document properties: people who investigate cheating cases look for all sorts of signals in document metadata that I don’t want to reveal here – but trust me, they are very useful evidence
Staff training: dealing with cheating is something we can get better at with training; for example, our research has found that people become more accurate at detecting contract cheating after training
Learning outcomes: think carefully about the outcomes being assessed, and maybe don’t try to assess lower-level outcomes if you don’t need to as they are harder to secure
Proctoring/lockdown: there is strong evidence students score worse grades in remotely proctored tests than in unsupervised online tests, which probably means they cheat less – but this needs to be balanced against privacy, surveillance and other concerns
Open book: if you make an exam open book you no longer need to stop people from bringing their notes in, eliminating a type of cheating (and often making for a better exam)
Content-matching: love it or hate it, the introduction of text-matching tools put a big dent in copy-paste plagiarism – though there are concerns around intellectual property and the algorithmization of integrity
Academic integrity modules: yes it’s important we teach students about integrity, but does anybody have evidence it actually reduces rates of cheating? (the answer is no as far as I know)
Honour codes: popular in North America, these require students to sign a document saying that they know what cheating and integrity are and that they’ll do the right thing… the problem is that their effects on cheating are quite small
Reflective practice: reflection matters but I’ve heard from a friend that apparently people lie and embellish a lot in these tasks (but of course I’ve never done that)
Legislation: laws that ban cheating sound like a good idea, and they might have some symbolic value, but despite being around since the 70s in some contexts there is no evidence they work (and some evidence they don’t work)
Site blocking: while it sounds like a good idea to block access to cheating websites, the problem is that these blocks are super-easy to circumvent for students, and if they also block educators from accessing sites they can be counter-productive
Authentic assessment: I LOVE AUTHENTIC ASSESSMENT AND IT SHOULD BE THE DEFAULT MODE OF ASSESSMENT (ok with that out of the way, let me be controversial: there’s just no evidence authentic assessment has effects at reducing rates of cheating, and there is evidence of industrial-scale cheating in authentic assessment)
Unsupervised multiple-choice questions: just don’t use these for high-stakes summative assessment; they are the site of so much cheating and collusion (but do use them for formative tasks!)
Bans: there was talk of banning essays on the theory that this would stop essay mills and somehow miraculously stop cheating… the problem is that contract cheating sites don’t just produce essays
Reusing tasks: thanks to sites like Chegg, once an assessment has been set you can assume the question and the answers are public knowledge (do click that Chegg link if you want to cry)
That’s where I’d put things on the list – what about you? If you’d like to revise the list, it’s available as a template on Canva (free account required).
Professor Phillip (Phill) Dawson is the Associate Director of the Centre for Research in Assessment and Digital Learning (CRADLE) at Deakin University. His two latest books are Defending Assessment Security in a Digital World: Preventing E-Cheating and Supporting Academic Integrity in Higher Education (Routledge, 2021) and the co-edited volume Re-imagining University Assessment in a Digital World (Springer, 2020). Phill’s work on cheating is part of his broader research into assessment, which includes work on assessment design and feedback. In his spare time Phill performs improv comedy and produces the academia-themed comedy show The Peer Revue.