How to fix the fascinating, challenging, dangerous problem of cheating

Cheating is a big problem. By my reading of the literature, around one in ten Australian university students has at some stage submitted an assignment they didn’t do themselves. Add to that other types of cheating such as using unauthorised material in exams, and emergent threats from artificial intelligence, and you have a fascinating, challenging and dangerous problem.

How can we address this problem? That’s a really hard question. When I talk about cheating I’ve learnt that I need to acknowledge a few big macro factors. So if you think cheating is caused by neoliberalism, under-funded education, or a fundamentally broken system of assessment then I’m not here to argue with you. But I don’t find those to be tractable problems for me, with the skillset I have.

I research what sorts of interventions we can do to address cheating within the everyday higher education situation we are in now. For my keynote at the Higher Education Research & Development Society of Australasia (HERDSA) conference this year I ranked different approaches to addressing cheating. I used the genre of a tier list to do so. Tier lists are used to show how good some ideas/interventions/albums/animals/foods are compared to others. Here’s my completed tier list for anti-cheating approaches:

The first thing to look at is the tiers: S, A, B, C, D, and F. The S-tier is where the most effective anti-cheating approaches live. Why ‘S’? That’s difficult to answer, and a fun rabbit hole to go down, but suffice it to say that S is where the most effective approaches are, and F is where the least effective approaches are.

What’s on the tiers and why?

S-tier
Swiss cheese: the layering of multiple different anti-cheating interventions can be more effective than just one intervention

Central teams: dealing with cheating is an expert practice – its own job title these days – so concentrate those experts together and resource them well so they can take the load off everyday academics

Amnesty/self-report: rather than treating every case of cheating through an adversarial pseudo-legal process, we should also allow students to come forward and say “I’ve done something wrong and I’d like to make it right”

Programmatic assessment: zooming out from trying to stop cheating in every individual act of assessment, and instead thinking: how do we secure the degree as a whole from cheating?

A-tier
Tasks students want to do: the rather obvious idea that students might cheat less in tasks they actually want to do

Vivas: having discussions with students about their work can be a great way to understand if they actually did it themselves

Stylometry: a range of emerging technologies that can compare student assignments with their previous work to see if they were likely all written by the same person (hopefully the student)

Document properties: people who investigate cheating cases look for all sorts of signals in document metadata that I don’t want to reveal here – but trust me, they are very useful evidence

Staff training: dealing with cheating is something we can get better at; for example, our research has found that people can become more accurate at detecting contract cheating with training

B-tier
Learning outcomes: think carefully about the outcomes being assessed, and maybe don’t try to assess lower-level outcomes if you don’t need to as they are harder to secure

Proctoring/lockdown: there is strong evidence students score worse grades in remote proctored tests vs unsupervised online tests, which probably means they cheat less – but this needs to be balanced against privacy, surveillance and other concerns

Open book: if you make an exam open book you no longer need to stop people from bringing their notes in, eliminating a type of cheating (and often making for a better exam)

Content-matching: love it or hate it, the introduction of text-matching tools put a big dent in copy-paste plagiarism – though there are concerns around intellectual property and the algorithmization of integrity

Better exam design: a grab bag of clever tricks test designers use that I can’t explain in a sentence – but trust me, if you run tests you should look it up

Face-to-face exams: these are ok, not great, and likely the site of more cheating than we think, but if you need to assess lower-level learning outcomes they are solid B-tier material

C-tier
Academic integrity modules: yes it’s important we teach students about integrity, but does anybody have evidence it actually reduces rates of cheating? (the answer is no as far as I know)

Honour codes: popular in North America, these require students to sign a document saying that they know what cheating and integrity are and that they’ll do the right thing… the problem is that their effects on cheating are quite small

Reflective practice: reflection matters but I’ve heard from a friend that apparently people lie and embellish a lot in these tasks (but of course I’ve never done that)

Legislation: laws that ban cheating sound like a good idea, and they might have some symbolic value, but despite being around since the 70s in some contexts there is no evidence they work (and some evidence they don’t work)

D-tier
Site blocking: while it sounds like a good idea to block access to cheating websites, the problem is that these blocks are super-easy to circumvent for students, and if they also block educators from accessing sites they can be counter-productive

Authentic assessment: I LOVE AUTHENTIC ASSESSMENT AND IT SHOULD BE THE DEFAULT MODE OF ASSESSMENT (ok with that out of the way, let me be controversial: there’s just no evidence authentic assessment has effects at reducing rates of cheating, and there is evidence of industrial-scale cheating in authentic assessment)

F-tier
Unsupervised multiple-choice questions: just don’t use these for high-stakes summative assessment; they are the site of so much cheating/collusion (but do use them for formative tasks!)

Bans: there was talk about banning essays because they would stop essay mills and somehow miraculously stop cheating… the problem is that contract cheating sites don’t just make essays

Reusing tasks: thanks to sites like Chegg, once an assessment has been set you can assume the question and the answers are public knowledge (do click that Chegg link if you want to cry)

That’s where I’d put things on the list – what about you? If you’d like to revise the list, it’s available as a template on Canva (free account required).

Professor Phillip (Phill) Dawson is the Associate Director of the Centre for Research in Assessment and Digital Learning (CRADLE) at Deakin University. His two latest books are Defending Assessment Security in a Digital World: Preventing E-Cheating and Supporting Academic Integrity in Higher Education (Routledge, 2021) and the co-edited volume Re-imagining University Assessment in a Digital World (Springer, 2020). Phill’s work on cheating is part of his broader research into assessment, which includes work on assessment design and feedback. In his spare time Phill performs improv comedy and produces the academia-themed comedy show The Peer Revue.

Academics, we need useful dialogues not monologues

(Illustration by Oslo Davis. Copyright Oslo Davis 2022. Used with permission. www.oslodavis.com)

Some things in academia become normalised as meme-worthy ‘Shit Academics Say’. Sure, senior academics offering the ‘more of a comment than a question’ contribution after a conference presentation is not the most pressing of issues in academia.

But it’s a behaviour that we, two early career researchers, picked up on straight away at our first in-person academic conference, HERDSA. These observations aren’t unique to this conference (editor’s note: totally!), but as it was our first in-person conference, we found it apt to raise, because these non-question monologues struck us as ill-timed and even problematic. In raising the issue, we would like to productively discuss not only what we noticed but also how we believe conference organisers, session chairs and audience members can improve the experience for presenters and attendees.

As newbies to in-person conferences, we looked forward to the opportunity to engage with top researchers in our field about their findings. In many ways, HERDSA 2022 met these expectations.

Unfortunately, with limited time for questions, we were not always able to both form our question and get the attention of the mic-holder before the less-of-a-question-more-of-a-comment attendee. At the end of nearly every presentation, we noted at least one audience member who stole the floor, eating away at the limited Q&A time, to offer their opinion or make a lengthy comment.

In one session, after a tremendous keynote delivered by Professor Michelle Trudgett on her research into supporting Indigenous early career researchers, the very first comment made was that Aboriginal leadership in universities should ensure non-Indigenous people are aware of the issues facing Indigenous staff and students. In our observation, this comment seemed odd given that the keynote speaker had just spoken at length about the additional workload that Aboriginal leaders are expected to carry at universities. It was one example of an audience member diverting the discussion away from what was being presented and onto something unrelated (and engaging in whataboutism). Another audience member then pointed out the microaggression implicit in the man’s comment: asking Aboriginal leaders to take on the additional load of educating non-Indigenous colleagues when the room had just been presented with data suggesting that Aboriginal leaders are overworked. This exchange stirred a robust discussion within the room, and eventually allowed others to draw actively on the points addressed in the keynote. Although unimpressed with the initial “question” raised, I (Tanoa) enjoyed observing the room’s participation and the conversational exchanges that did address points relating to the keynote; the latter, to me, was a demonstration of what engaging academic conversations should be. Although we use this one example, we witnessed similar exchanges multiple times across the three-day conference.

We pose a few speculations as to why an audience member may use the Q&A in an unproductive way; we believe some are unintentional, while others are less benign:

• Unsure how to succinctly frame the question

When a presentation has got our brains buzzing with thoughts and ideas, it can be difficult to make clear connections and articulate them. As one academic pointed out, what often arises is a comment with many entangled parts, not a straightforward question. That resonates with us; what we found helpful was to keep a notebook, take notes, and save the reflection or half-formed question for a more apt setting after the presentation (in person over tea, via email, or by requesting a Zoom catch-up). HERDSA provided great resources, such as an events app, which allowed attendees to connect with presenters should there be any follow-up questions or comments.

• Using the Q&A for validation

As one academic expressed, it could be that the attendee does know how to frame their question – they just don’t want to ask one. Instead, they essentially want the presenter to agree with them.

• Using the Q&A for one’s own gain

Not all questions are good questions, and some audience members may use the guise of a question to signpost their own research or expertise in the matter.

• Using the Q&A as a microaggression

A microaggression, in this context, is a verbal indignity – one that often flies under the radar due to its subtlety. A microaggression is not the same as a respectful debate, although the post-presentation Q&A may also not be an appropriate time to engage in a one-on-one debate. An alternative might be to take it to academic Twitter!

By and large, we noticed non-question “questions” were posed to female-presenting presenters by male-presenting audience members. Our observations align with research concluding that women ask fewer and shorter questions than men. Additionally, it has been found that senior academics ask more questions than junior academics.

We witnessed many thought-provoking presentations at HERDSA, and we both engaged with and listened in on many stirring conversations; we believe that conference organisers and/or session chairs can and should make space for discussion to flow. This is made possible by conference organisers and chairs proactively communicating with attendees around ‘housekeeping’. For example, it could be clearly stated whether there will be time for comments and reflections. In one session in particular, the session chair, Dr. Wade Kelly, set a (paraphrased) precedent that questions should be questions and advised that an ideal question is 10 words or fewer.

Further, conference organisers can ensure that attendees can be – and feel – heard by offering a range of formats. The traditional, one-way style of presentation leaves little opportunity during the session for audience members to engage with each other, and minimal time for questions. HERDSA offered alternatives like non-hierarchical fishbowl and roundtable discussions in which attendees could better engage with each other – and the facilitator!

Audience members are also accountable for how they navigate and engage in productive conversations. We like the helpful Conference Monkey guide written by Georgina Torbet and would like to add some additional considerations to asking a question after an academic presentation:

1. Firstly (which is our whole point): is this actually a question, or are you showing off? Consider whether you already know the answer.

2. Will the response to this question directly impact what you do? (i.e., is your question authentic; is it informing practice/research?)

3. Has somebody else already asked this question? Or could a response that was previously given also apply to your question?

No? Great!

If you have made it this far, then your question is probably valid and engaging, so we propose the following when asking your questions:

• Write down your question (or concepts)
• Be mindful of time and ask only one question (more if time permits)
• If necessary, give a quick introduction
• Reconsider whether your question requires a backstory (it probably doesn’t)
• It’s okay to briefly thank the presenter

4. Be mindful of the space you are in and the space you are “allowed” to occupy.

The keynote speaker has been invited into the space because the conference organisers/executive team determined that their research is valuable to the academic audiences that have chosen to attend. Give presenters the respect they deserve and don’t centre yourself.

Finally, it’s totally fine to have a critical question. However, question sessions aren’t intended to be a ‘gotcha’ moment; present criticism constructively.

Ameena Payne is a PhD student at Deakin’s Centre for Research in Assessment and Digital Learning (CRADLE). Ameena is a recipient of her alma mater’s Outstanding Young Alumna Award (2022) and several teaching excellence commendations. She is a Fellow of the Higher Education Academy (AdvanceHE) and the Higher Education Research and Development Society of Australasia (HERDSA).

Ashah Tanoa, a Pinjareb/Whadjuk Noongar woman from Perth, Western Australia, is an Associate Lecturer at Murdoch University. Ashah is studying a Master of Education by Research, looking at Indigenous student retention rates and what influences a student’s decision to leave within their first year at university. She was the 2021 recipient of the Vice Chancellor’s award for Excellence in Enhancing Learning. In 2022, Ashah was accepted to present at the HERDSA conference in Melbourne, on an evaluation of an innovative unit that teaches the hidden curriculum to Aboriginal and Torres Strait Islander students.