
Education: the five concerns we should debate right now

Meghan Stacey on the trouble with teaching.

Deb Hayes on making school systems more equitable.

Phillip Dawson on how we should treat ChatGPT.

Sarah O’Shea on widening participation at university.

Scott Eacott on the Productivity Commission’s review of the National School Reform Agreement.

The trouble with teaching by Meghan Stacey

Last year was a big one for teachers. In NSW, where I live and work, years of escalating workload, the relentless intensity of the job, and salaries declining in real terms were compounded by reports of debilitating staff shortages, leading to considerable strike action. The latter half of 2022 then saw an NSW inquiry and a federal action plan aiming to address these shortages. Nevertheless, government responses have been critiqued as focusing too much on supply and not enough on retention. Concerns about teachers’ working conditions do not seem to have really been heard, and there’s not much point talking about supply, or any other challenge for education in 2023, until we truly have that conversation.

Australian teachers work long hours and, compared with their international peers, shoulder a considerable administrative load. It is true that some steps are being taken to reduce that load, but not always in a way that recognises the intellectual and creative complexity of teachers’ work. And according to the Teachers Federation, when NSW schools go back in just a few days, they will be starting their year with a whopping 3,300 vacant positions. So there is much still to be done, and I wonder: in 2023, will action be taken that adequately addresses the depth of disquiet rumbling amongst the profession?

Making our schooling systems more equitable by Deb Hayes

This draws on parts of my book with Craig Campbell.

In terms of funding, how much is enough to provide a good education to an Australian child? This question has occupied policymakers for decades. 

In 1973, a Whitlam-appointed committee proposed eight school funding categories, A to H, with A the best-resourced. It argued that support for Category A schools, whose resource levels were already above agreed targets, should be phased out, because government aid could not be justified for maintaining or raising standards beyond those that publicly funded schools could hope to achieve by the end of the decade.

Today, Commonwealth funding for schools is needs-based and calculated according to the Schooling Resource Standard (SRS), which estimates how much public funding a school needs to meet its students’ educational needs.

Sounds good? Well, not really, because thanks to a Fraser amendment to Whitlam’s proposal, schools that already have enough to provide a good education still receive federal government funding. Under current funding arrangements, public schools in all states except the ACT will be funded at 91% of their SRS or less by 2029.
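To make that percentage concrete, here is a toy calculation of what falling short of 100% of the SRS means for a single school. The per-student amount and enrolment below are invented for illustration; actual SRS entitlements are calculated per student and per school.

```python
# Toy illustration of an SRS funding gap.
# The per-student amount and enrolment are hypothetical, not actual SRS figures.
srs_per_student = 13_000   # hypothetical full SRS entitlement, in dollars
funded_share = 0.91        # the 91% level cited above
students = 500             # hypothetical school enrolment

gap = srs_per_student * (1 - funded_share) * students
print(f"Annual shortfall for this school: ${gap:,.0f}")
# At 91% of the SRS, this hypothetical 500-student school is short $585,000 a year.
```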

It’s time to pause government funding to non-government schools that already have enough to provide a good education until all public schools are funded at 100% of their SRS.

Challenges for Widening Participation by Sarah O’Shea

2023 will usher in both challenges and opportunities for widening participation in Australian higher education, not least of which is the predicted growth in school leavers. Those born during the Costello ‘baby boom’ of 2005-2008 will be leaving school from 2023, with an almost 20% increase in this age cohort expected by 2030 (Productivity Commission, 2022). While this is good news for a post-pandemic higher education sector, ensuring equity in the face of greater competition for places will be an important challenge for the sector moving forward.

Given the greater number and diversity of participants, how we conceive and categorise ‘equity’ will be key. The current six equity categories were defined in the eighties, and research has indicated the limitations of these outdated classifications (Tomaszewski et al., 2018). The diversity of current and future student populations requires expanded definitions of ‘who’ our equity students are: definitions that are both more granular in nature and responsive to emerging trends in higher education participation. Any changes in definitions need to be accompanied by better measurement of cumulative disadvantage, recognising that students often belong to multiple equity groups and that this membership has distinct impacts at different stages of the student life cycle (Tomaszewski et al., 2020).

For 2023 and beyond, contextualised and targeted approaches to supporting growing and diverse populations will be required: approaches that cater both to the uniqueness of our students and to their specific learning environments. Of course, such bespoke approaches to widening participation are resource intensive. In an environment of reduced funding and pressure to meet equity performance benchmarks, this too will be a challenge for institutions, particularly regional universities, which often serve the most disadvantaged communities.

ChatGPT is the least capable it will ever be. What can we do about it? By Phillip Dawson

I asked ChatGPT to “Write 200 words about the challenge that ChatGPT poses to education in 2023”. It gave me a 283-word response, starting with:

“ChatGPT poses several challenges to education in 2023. One of the most significant challenges is that it has the ability to generate human-like text, which makes it difficult to distinguish between machine-generated content and that written by a human. This can lead to concerns about plagiarism and the authenticity of information, making it more difficult for educators to ensure that students are producing original work.”
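If you want to experiment with this kind of prompt programmatically rather than through the web interface, here is a minimal sketch using OpenAI’s Python client. The model name and response handling are assumptions on my part; the response quoted above came from the ChatGPT web interface, not this code.

```python
# Minimal sketch: sending the same prompt to an OpenAI chat model.
# Assumes the `openai` package is installed and an API key is set in the
# OPENAI_API_KEY environment variable. The model name is an assumption.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # assumed model; the web ChatGPT model may differ
    messages=[{
        "role": "user",
        "content": "Write 200 words about the challenge that ChatGPT "
                   "poses to education in 2023",
    }],
)

print(response["choices"][0]["message"]["content"])
```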
Like much of the media commentary, ChatGPT jumped straight to academic integrity concerns. Yes, this is a challenge. Assessment has become harder thanks to the emergence of generative artificial intelligence.

But that’s not the whole picture. These tools are getting more and more sophisticated, and are already used in society to be creative, have fun, and get stuff done. Rather than focusing so much on the threats ChatGPT poses to traditional assessment practices, we might need to question how fit for purpose our assessments are for the world our students will inhabit when they graduate. Because these tools are currently the least capable they’ll ever be.

I hope 2023 is the year where we double down on what we could call “future-authentic assessment”: assessment that considers what’s likely to happen to the world.

Where’s the discussion of funding? Scott Eacott on the Productivity Commission’s review of the National School Reform Agreement.

Setting national education policy is a complex task. It is made even more difficult in Australia because constitutional responsibility for education lies with the states and territories while the Commonwealth government controls the finances. So while legislation and national declarations establish the social contract between government and its citizens (equity and excellence in school provision), jurisdictional sovereignty can get in the way of reform.

Friday’s release of the Productivity Commission’s review of the National School Reform Agreement (NSRA) highlights the complexity. The NSRA is a joint agreement between the Commonwealth, states, and territories with the objective of delivering high-quality and equitable education for all Australian students (the social contract). There is a lot to unpack in the review, with considerable media attention on the failure of the NSRA to improve student outcomes. I want to raise three key systemic points:

1) Common critiques of federalism focus on overlap in responsibilities (e.g., funding of schools) and duplication as states and territories replicate national policies and initiatives (e.g., professional standards, curriculum). This imposes artificial divisions in a complex policy domain whose actions have impacts well beyond state or territory borders. As each jurisdiction seeks to assert its independence and sovereignty, opportunities for engagement are reduced, some of the strengths of a federal system of government are surrendered, and important failsafe mechanisms are removed. Achieving uniformity across eight jurisdictions is difficult, time consuming, and often reduces initiatives to the lowest common denominator.
2) Despite some concerns about new data points (e.g., additional testing, administrative paperwork), the review calls for greater reporting and transparency from states and territories. In most, if not all, cases the data points already exist. What the review argues for is a common basis for new targets but greater flexibility in how jurisdictions go about delivering on them. This flexibility comes with greater accountability for the performance of reforms against benchmarks. That is, each jurisdiction will be held accountable for how its reforms deliver on targets. Such reporting would make it clear when reforms are, and are not, working for students.

3) Funding was excluded as a topic for discussion in the review. Since at least the first Gonski Report, the funding of Australian schools has been a central issue. As the NSRA was established on the back of a $319B funding deal for schools, its objective cannot be achieved unless funding mechanisms ensure the equitable distribution of funds to schools, and specifically the targeting of funding to those schools and students that are most disadvantaged.

As noted, there is plenty to unpack, and the above just points to some key systemic design issues in a process focused on improving outcomes for students and holding jurisdictions to account for their reforms in meeting agreed targets.

Meghan Stacey is a senior lecturer in the UNSW School of Education, researching in the fields of the sociology of education and education policy, and is the director of the Bachelor of Education (Secondary). Taking a particular interest in teachers, her research considers how teachers’ work is framed by policy, as well as the effects of such policy for those who work with, within and against it. She is an associate editor of The Australian Educational Researcher.

Debra Hayes is professor of education and equity, and head of the University of Sydney School of Education and Social Work. Her most recent book (with Ruth Lupton) is Great Mistakes in Education Policy: How to avoid them in the Future (Policy Press, 2021). She tweets at @DrDebHayes.

Professor Phillip (Phill) Dawson is the Associate Director of the Centre for Research in Assessment and Digital Learning (CRADLE) at Deakin University. His two latest books are Defending Assessment Security in a Digital World: Preventing E-Cheating and Supporting Academic Integrity in Higher Education (Routledge, 2021) and the co-edited volume Re-imagining University Assessment in a Digital World (Springer, 2020). Phill’s work on cheating is part of his broader research into assessment, which includes work on assessment design and feedback. In his spare time Phill performs improv comedy and produces the academia-themed comedy show The Peer Revue.

Sarah O’Shea is a Professor and Director of the National Centre for Student Equity in Higher Education at Curtin University. Sarah has over 25 years’ experience teaching in universities as well as in the VET and adult education sectors, and she has published widely on issues related to educational access and equity.

Scott Eacott, PhD, is deputy director of the Gonski Institute for Education, professor of education in the School of Education at UNSW Sydney, and adjunct professor in the Department of Educational Administration at the University of Saskatchewan.

How to fix the fascinating, challenging, dangerous problem of cheating

Cheating is a big problem. By my reading of the literature, around one in ten Australian university students has at some stage submitted an assignment they didn’t do themselves. Add to that other types of cheating such as using unauthorised material in exams, and emergent threats from artificial intelligence, and you have a fascinating, challenging and dangerous problem.

How can we address this problem? That’s a really hard question. When I talk about cheating, I’ve learnt that I need to acknowledge a few big macro factors. So if you think cheating is caused by neoliberalism, under-funded education, or a fundamentally broken system of assessment, then I’m not here to argue with you. But I don’t find those to be tractable problems for me, with the skillset I have.

I research what sorts of interventions we can do to address cheating within the everyday higher education situation we are in now. For my keynote at the Higher Education Research & Development Society of Australasia (HERDSA) conference this year I ranked different approaches to addressing cheating. I used the genre of a tier list to do so. Tier lists are used to show how good some ideas/interventions/albums/animals/foods are compared to others. Here’s my completed tier list for anti-cheating approaches:

The first thing to look at is the tiers: S, A, B, C, D, and F. The S-tier is where the most effective anti-cheating approaches live. Why ‘S’? That’s difficult to answer, and a fun rabbit hole to go down, but suffice it to say that S is where the most effective approaches are, and F is where the least effective approaches are.

What’s on the tiers and why?

S-tier

Swiss cheese: layering multiple different anti-cheating interventions can be more effective than any single intervention (a toy sketch of this logic follows this tier)

Central teams: dealing with cheating is an expert practice – its own job title these days – so concentrate those experts together and resource them well so they can take the load off everyday academics

Amnesty/self-report: rather than treating every case of cheating through an adversarial pseudo-legal process, we should also allow students to come forward and say “I’ve done something wrong and I’d like to make it right”

Programmatic assessment: zooming out from trying to stop cheating in every individual act of assessment, and instead thinking: how do we secure the degree as a whole from cheating?
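To see why layering can beat any single intervention, here is a toy model of the Swiss cheese idea. It makes a loudly simplifying assumption that each layer catches a cheating attempt independently, and the catch rates are invented for illustration, not research findings.

```python
# Toy "Swiss cheese" model: if each layer independently catches a cheating
# attempt with some probability, the chance an attempt slips through every
# layer is the product of the individual miss rates.

def slip_through_rate(catch_rates):
    """Probability an attempt evades all layers, assuming independence."""
    rate = 1.0
    for p in catch_rates:
        rate *= (1.0 - p)
    return rate

# Hypothetical catch rates for three layered interventions.
layers = {"text matching": 0.30, "stylometry": 0.25, "viva": 0.40}

print(f"Text matching alone misses {1 - layers['text matching']:.1%} of attempts")
print(f"All three layers together miss {slip_through_rate(layers.values()):.1%}")
# Text matching alone misses 70.0% of attempts
# All three layers together miss 31.5% (0.70 * 0.75 * 0.60)
```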

A-tier

Tasks students want to do: the rather obvious idea that students might cheat less in tasks they actually want to do

Vivas: having discussions with students about their work can be a great way to understand if they actually did it themselves

Stylometry: a range of emerging technologies that can compare student assignments with their previous work to see if they were likely all written by the same person (hopefully the student); a naive sketch of the underlying idea follows this tier

Document properties: people who investigate cheating cases look for all sorts of signals in document metadata that I don’t want to reveal here – but trust me, they are very useful evidence

Staff training: dealing with cheating is something we can get better at with training, for example, our research has found that people can get more accurate at detecting contract cheating with training
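As a rough illustration of the stylometry idea above, here is a naive sketch that compares the character-trigram profiles of two texts. This is a baseline I have made up for illustration; real stylometry tools use far richer feature sets and are not reducible to a dozen lines.

```python
# Naive stylometry sketch: cosine similarity of character-trigram counts.
# Illustrative only; real tools use much richer features than this.
import math
from collections import Counter

def trigram_profile(text):
    """Count overlapping 3-character sequences in lowercased text."""
    text = text.lower()
    return Counter(text[i:i + 3] for i in range(len(text) - 2))

def cosine_similarity(a, b):
    """Cosine similarity between two Counter profiles (1.0 = identical counts)."""
    dot = sum(a[g] * b[g] for g in set(a) & set(b))
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

earlier_work = "The results suggest that assessment design shapes how students engage."
new_submission = "Our findings indicate that the design of assessment shapes engagement."

score = cosine_similarity(trigram_profile(earlier_work),
                          trigram_profile(new_submission))
print(f"Stylistic similarity: {score:.2f}")  # low scores might warrant a closer look
```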

B-Tier

Learning outcomes: think carefully about the outcomes being assessed, and maybe don’t try to assess lower-level outcomes if you don’t need to, as they are harder to secure

Proctoring/lockdown: there is strong evidence that students score worse grades in remote proctored tests versus unsupervised online tests, which probably means they cheat less, but this needs to be balanced against privacy, surveillance and other concerns

Open book: if you make an exam open book, you no longer need to stop people from bringing their notes in, eliminating a type of cheating (and often making for a better exam)

Content-matching: love it or hate it, the introduction of text-matching tools put a big dent in copy-paste plagiarism, though there are concerns around intellectual property and the algorithmization of integrity

Better exam design: a grab bag of clever tricks test designers use that I can’t explain in a sentence, but trust me: if you run tests, you should look it up

Face-to-face exams: these are OK, not great, and likely the site of more cheating than we think, but if you need to assess lower-level learning outcomes they are solid B-tier material

C-tier

Academic integrity modules: yes, it’s important we teach students about integrity, but does anybody have evidence it actually reduces rates of cheating? (the answer is no, as far as I know)

Honour codes: popular in North America, these require students to sign a document saying that they know what cheating and integrity are and that they’ll do the right thing… the problem is that their effects on cheating are quite small

Reflective practice: reflection matters, but I’ve heard from a friend that apparently people lie and embellish a lot in these tasks (but of course I’ve never done that)

Legislation: laws that ban cheating sound like a good idea, and they might have some symbolic value, but despite being around since the 1970s in some contexts, there is no evidence they work (and some evidence they don’t)

D-tier

Site blocking: while it sounds like a good idea to block access to cheating websites, the problem is that these blocks are super easy for students to circumvent, and if they also block educators from accessing the sites they can be counter-productive

Authentic assessment: I LOVE AUTHENTIC ASSESSMENT AND IT SHOULD BE THE DEFAULT MODE OF ASSESSMENT (ok, with that out of the way, let me be controversial: there’s just no evidence that authentic assessment reduces rates of cheating, and there is evidence of industrial-scale cheating in authentic assessment)

F-tier

Unsupervised multiple-choice questions: just don’t use these for high-stakes summative assessment; they are the site of so much cheating and collusion (but do use them for formative tasks!)

Bans: there was talk of banning essays on the grounds that this would shut down essay mills and somehow miraculously stop cheating… the problem is that contract cheating sites don’t just produce essays

Reusing tasks: thanks to sites like Chegg, once an assessment has been set you can assume the question and the answers are public knowledge (do click that Chegg link if you want to cry)

That’s where I’d put things on the list – what about you? If you’d like to revise the list, it’s available as a template on Canva (free account required).
