Sarah Jefferson

A new sheriff is coming to the wild ChatGPT west

You know something big is happening when the CEO of OpenAI, the creator of ChatGPT, starts advocating for “regulatory guardrails”. Sam Altman testified to the US Senate Judiciary Committee this week that the potential risks of misuse are significant, echoing other recent calls by AI pioneer and former Google researcher Geoffrey Hinton, the so-called “godfather of AI”.

In contrast, teachers continue to be bombarded with a dazzling array of possibilities, seemingly without limit – the great plains and prairies of the AI “wild west”! One recent estimate claimed that “around 2000 new AI tools were launched in March” alone!

As teachers across the globe head into end-of-semester, or end-of-academic-year, assessment and reporting, the sheer scale of new AI tools is a stark reminder that learning, teaching, assessment, and reporting are up for serious discussion in the AI hyper-charged world of 2023. Not even a pensive CEO’s reflection or an engineer’s growing concern has tempered the expansion.

Until there is some regulation, the proliferation of AI tools – and voices spruiking their merits – will continue unabated. Selecting and integrating AI tools will remain contextual and evaluative work, regardless of regulation. Where does this leave schoolteachers and tertiary academics, and how do we do that work with 2000 new tools arriving in a single month (is it even possible)?

Some have jumped for joy and packed their bags for new horizons; some have recoiled in terror and impotence, hunkering down in their settled pedagogical “back east”.

As if this was not enough to deal with, Columbia University undergraduate Owen Terry last week staked the claim that students are not using ChatGPT for “writing our essays for us”. Rather, they are breaking the task down into components and asking ChatGPT to analyse and suggest ideas for each one. They then use the ideas ChatGPT suggests to “modify the structure a bit where I deemed the computer’s reasoning flawed or lackluster”. He argues this makes detecting the use of ChatGPT “simply impossible”.

It seems students are far savvier about how they use AI in education than we might give them credit for, suggests Terry. They are not necessarily looking for the easy route but are engaging with the technology to enhance their understanding and express their ideas. They are not looking to cheat, just to collate ideas and information more efficiently.

Terry challenges us as educators and researchers to consider that we might be underestimating students’ ethical desire to be broadly educated, rather than to become automatons serving up predictive banality. His searing critique of how we are dealing with our “tools” is blunt – “very few people in power even understand that something is wrong…we’re not being forced to think anymore”. Perhaps contrary to how some might view the challenge, Terry suggests we might even:

need to move away from the take-home essay…and move on to AI-proof assignments like oral exams, in-class writing, or some new style of schoolwork better suited to the world of artificial intelligence.

The urgency of “what do I do with the 2000 new AI apps” now seems even greater. These are only the tools released during March. Who knows how many will spring up this month, or next, or by the end of 2023? Who knows how long it will take partisan legislators to act, or what they will come up with in response? Until then, we have to make our own map.

Some have offered a range of educational maps based on alliterative Cs – 4Cs, 6Cs – so here is a new set of 4Cs for using AI effectively while we await legislators’ deliberations:

Curation – pick and choose apps which serve the purpose of student learning. Avoid popularity or novelty for its own sake. In considering what this looks like in practice, it is useful to consider the etymology of curation, which comes from the Latin cura, ‘care’ (curare, ‘to take care of’). Indeed, if our primary charge is to educate from a holistic perspective, then that care must extend to our choice of the AI tools and apps that will serve our students’ learning needs and engagement.

Fostering innate curiosity means being unafraid to trial things for ourselves, and with and for our students. But this should not come at the expense of the intended learning outcomes; rather, it should ensure closer alignment with them. When curating AI, be discerning about whether a tool adds to the richness of student learning.

Clarity – identify for students (and teachers) why any chosen app has educative value. It’s the elevator pitch of 2023 – if you can’t explain its relevance to students in 30 seconds, it’s a big stretch to ask them to be interested. With 2000 new offerings in March alone, the spectres of cognitive load theory and job demands-resources theory loom large.

Competence – don’t ask students to use a tool you haven’t explored sufficiently yourself. Maslow’s wisdom about “having a hammer and seeing every problem as a nail” resonates here. Having a hammer might mean I only see problems as nails, but at least it helps if I know how to use the hammer properly! After all, how many educators really exploit the power, breadth, and depth of Word or Excel…and they’ve been around for a few years now. The rapid proliferation of AI makes developing competence in anything more than a few key tools quite unrealistic. Further, it is already clear that prompt-engineering skills need to develop more fully in order to maximise the usefulness of AI.

Character – Discussions around AI ethical concerns—including bias in datasets, discriminatory output, environmental costs, and academic integrity—can shape a student’s character and their approach to using AI technologies. Understanding the biases inherent in AI datasets helps students develop traits of fairness and justice, promoting actions that minimise harm. Comprehending the environmental impact of AI models fosters responsibility and stewardship, and may lead to both conscientious use and improvements in future models. Importantly for education, tackling academic integrity heightens students’ sense of honesty, accountability, and respect for others’ work. Students have already risen to the occasion, with local and international research capturing student concerns and their beliefs about the importance of learning to use these technologies ethically and responsibly. Holding challenging conversations about AI ethics prepares students for ethically complex situations, fostering the character necessary in the face of these technologies.

These 4Cs are offered in the spirit of the agile manifesto that has undergirded software development over the last twenty years – “early and continuous delivery” and “deliver working software frequently”. The rapid advance from GPT-3, to GPT-3.5, and to GPT-4 shows the manifesto remains a potent rallying call. New iterations of these 4Cs for AI should similarly invite critique, refinement, and improvement.

Dr Paul Kidson is Senior Lecturer in Educational Leadership at the Australian Catholic University; Dr Sarah Jefferson is Senior Lecturer in Education at Edith Cowan University; Leon Furze is a PhD student at Deakin University researching the intersection of AI and education.