
Does the new AI Framework serve schools or edtech?

On 30 November 2023, the Australian federal government released its Australian Framework for Generative AI in Schools. This is an important step forward: it provides much-needed advice for schools following the November 2022 release of ChatGPT, a technological product capable of creating human-like text and other content. The Framework, which has undergone several rounds of consultation across the education sector, does important work in acknowledging opportunities while also foregrounding the importance of human wellbeing, privacy, security and safety.

Out of date already? 

However, in this fast-moving space, the policy may already be out of date. Following early enthusiasm (despite a ban in many schools), the hype around generative AI in education is shifting. As researchers who have studied generative AI in education for some years, we have moved to a much more cautious stance. A recent UNESCO article stated that “AI must be kept in check in schools”. The challenges of using generative AI safely and ethically, in ways that support human flourishing, are becoming increasingly apparent.

Some questions and suggestions

In this article, we suggest some of the ways the policy already needs to be updated and improved to better reflect emerging understandings of generative AI’s threats and limitations. Because the Framework is on a 12-month review cycle, teachers may find it provides less policy support than hoped in the meantime. We also wonder to what extent the educational technology industry’s influence has affected the tone of this policy work.

What is the Framework?

The Framework addresses six “core principles” of generative AI in education: Teaching and Learning; Human and Social Wellbeing; Transparency; Fairness; Accountability; and Privacy, Security and Safety. It provides guiding statements under each principle. However, some of these concepts are much less straightforward than the Framework suggests.

Problems with generative AI

Over time, users have become increasingly aware that generative AI does not provide reliable information. It is inherently biased, reflecting the biased material it has “read” during training. It is prone to data leaks and malfunctions. Its workings cannot be readily perceived or understood even by its own makers and vendors; it is therefore not transparent. It is the subject of global claims of copyright infringement in its development and use. It is vulnerable to power and broadband outages, suggesting the dangers of developing reliance on it for composing content.

Impossible expectations

The Framework may therefore have expectations of schools and teachers that are impossible to fulfil. It suggests schools and teachers can use tools that are inherently flawed, biased, mysterious and insecure, in ways that are sound, unbiased, transparent and ethical. If teachers feel their heads are spinning on reading the Framework, it is not surprising! Creators of the Framework need to interrogate their own assumptions, for example that “safe” and “high quality” generative AI exists, and ask who those assumptions serve.

As a policy document, the Framework also puts an extraordinary onus on schools and teachers to do high-stakes work for which they may not be qualified (such as conducting risk assessments of algorithms), or that they do not have the time or funding to complete. These tasks include designing appropriate learning experiences, revising assessments, consulting with communities, learning about and applying intellectual property rights and copyright law, and becoming expert in the use of generative AI. It is not clear how this can possibly be achieved within existing workloads, particularly when the nature and ethics of generative AI are complex and contested.

What needs to change in the next iteration?

  1. A better definition: At the outset, the definition of generative AI needs to acknowledge that it is, in most cases, a proprietary tool that may involve the extraction of school and student data. 
  2. A more honest stance on generative AI: As a tool, generative AI is deeply flawed. As computer scientist Deborah Raji says, experts need to stop talking about it “as if it works”. The Framework fails to recognise that generative AI is always biased, in that it is trained on limited datasets and with motivated “guardrails” created largely by white, male and United States-based developers. For example, a current version of ChatGPT does not speak in or use Australian First Nations words, for valid reasons related to the integrity of cultural knowledges. However, this indicates the whiteness of its “voice” and the problems inherent in requiring students to use or rely on this “voice”. The “potential” bias mentioned in the Framework would be better framed as “inevitable”. Policy also needs to acknowledge that generative AI is already creating profound harms, for example to children, to students, and to the climate through its unsustainable environmental impacts.
  3. A more honest stance on edtech and the digital divide: A recent UNESCO report has confirmed there is little evidence of any improvement to learning from the use of digital technology in classrooms over decades. The use of technology does not automatically improve teaching and learning. This honest stance also needs to acknowledge that there is an existing digital divide related to basic technological access (to hardware, software and connectivity) that means that students will not have equitable experiences of generative AI from the outset.
  4. Evidence: Education is meant to be evidence-informed. Given that there is little research demonstrating the benefits of generative AI use in education, while existing research does show the harms of algorithms, policymakers and educators should proceed with caution. Schools need support to develop processes and procedures to monitor and evaluate the use of generative AI by both staff and students. This should not be a form of surveillance, but rather take the form of teacher-led action research, to provide future high-quality and deeply contextual evidence.
  5. Locating policy in existing research: This policy has missed an opportunity to connect to the extensive policy, theory, research and practice around digital literacies developed since the 1990s, especially in English and literacy education, from which all disciplines could benefit. The policy has similarly missed an opportunity to foreground how digital AI-literacies need to be embedded across the curriculum, supported by relevant existing frameworks, such as the Literacy in 3D model (developed for cross-curricular work), with its focus on the operational, cultural and critical dimensions of any technological literacy. Another key concept from digital literacies is the need to learn “with” and “about” generative AI. Education policy needs to reference educational concepts, principles and issues, including automated essay scoring, learning styles, personalised learning, machine instruction and so on, supported by a glossary of terms.
  6. Acknowledging the known dangers of bots: It would also be useful for policy to be framed by long-standing research demonstrating the dangers of chatbots and their compelling capacity to shut down human creativity and criticality, and to suggest ways to mitigate these effects from the outset. This is particularly important given the threats to democracy posed by misinformation and disinformation generated at scale by humans using generative AI.
  7. Teacher transparency: All use of generative AI in schools needs to be disclosed. The use of generative AI by staff in the preparation of teaching materials and the planning of lessons needs to be disclosed to management, peers, students and families. The Framework seems to focus on students and their activities, whereas “academic integrity” needs to be modelled first by teachers and school leaders. Trust and investment in meaningful communication depend on readers knowing the sources of content; otherwise, cynicism may result. This disclosure is also necessary to monitor and manage the threat to teacher professionalism posed by the replacement of teacher intellectual labour with generative AI.
  8. Stronger acknowledgement of teacher expertise: Teachers are experts in more than just subject matter. They are expert in the pedagogical content knowledge of their disciplines, or how to teach those disciplines. They are also expert in their contexts, and in their students’ needs. Policy needs to support educators in countering the edtech rhetoric that teachers should be removed or replaced by generative AI, remaining only in support roles. The complex profession of teaching, based in relationality and community, needs to be elevated, not relegated to “knowing stuff about content”.
  9. Leadership around ethical assessment: OpenAI made a clear statement in 2023 that generative AI should not be used for summative assessment, and that such assessment should be done by humans. It is unfortunate the Australian government did not reinforce this advice at a national policy level, to uphold the rights of students and protect the intellectual labour of teachers.
  10. More detail: While acknowledging this is a high-level policy document and Framework, we call for more detail to assist the implementation of policy in schools. Given the aim of “defining what safe, ethical and responsible use of generative AI should look like”, the document is surprisingly light on specifics; a related education document from the US runs to 67 pages.

A radical policy imagination

At the 2023 Australian Association for Research in Education (AARE) conference, Jane Kenway encouraged participants to develop radical research imaginations. The extraordinary impacts of generative AI require a radical policy imagination, rather than timid or bland statements balancing opportunities and threats. It is increasingly clear that the threats cannot readily be dealt with by schools. The recent thoughts of UNESCO’s Assistant Director-General for Education on generative AI are sobering.

A significant part of this policy imagination involves finding the financial and other resources to support slow and safe implementation. It also needs to acknowledge, at the highest possible level, that if you identify as female, if you are a First Nations Australian, indeed, if you are anything other than white, male, affluent, able-bodied, heterosexual and compliant with multiple other norms of “mainstream” society, it is highly likely that generative AI does not speak for you. Policy must define a role for schools in developing students who can shape a more just future generative AI, not just use existing tools effectively.

Who is in charge . . . and who benefits?

Policy needs to enable and elevate the work of teachers and education researchers around generative AI, and the work of the education discipline overall, to contribute to raising the status of teachers. We look forward to some of the above suggestions being taken up in future iterations of the Framework. We also hope that all future work in this area will be led by teachers, not merely involve consultation with them. This includes the forthcoming work by Education Services Australia on evaluating generative AI tools. We trust that no staff or consultants on that project will have any links whatsoever to the edtech or broader technology industries. This is the kind of detail that may help the general public decide exactly who educational policy serves.

Generative AI was not used at any stage in the writing of this article.

The header image was definitely produced using Generative AI.

An edited and shorter version of this piece appeared in The Conversation.

Lucinda McKnight is an Australian Research Council Senior Research Fellow in the Research for Educational Impact Centre at Deakin University, undertaking a national study into the teaching of writing with generative AI. Leon Furze is a PhD Candidate at Deakin University studying the implications of Generative Artificial Intelligence in education, particularly for teachers of writing. Leon blogs about Generative AI, reading and writing.

A new sheriff is coming to the wild ChatGPT west

You know something big is happening when the CEO of OpenAI, the creator of ChatGPT, starts advocating for “regulatory guardrails”. Sam Altman testified to the US Senate Judiciary Committee this week that the potential risks of misuse are significant, echoing other recent calls by AI pioneer and former Google researcher Geoffrey Hinton, the so-called “godfather of AI”.

In contrast, teachers continue to be bombarded with a dazzling array of possibilities, seemingly without limit – the great plains and prairies of the AI “wild west”! One recent estimate made the claim “that around 2000 new AI tools were launched in March” alone!

Given teachers across the globe are heading into end-of-semester or end-of-year assessment and reporting, the sheer scale of new AI tools is a stark reminder that learning, teaching, assessment, and reporting are up for serious discussion in the AI hyper-charged world of 2023. Not even a pensive CEO’s reflection or an engineer’s growing concern has tempered the expansion.

Until there is some regulation, the proliferation of AI tools – and voices spruiking their merits – will continue unabated. Selecting and integrating AI tools will remain contextual and evaluative work, regardless of regulation. Where does this leave schoolteachers and tertiary academics, and how do we do this with 2000 new tools in one month (is it even possible)?!

Some have jumped for joy and packed their bags for new horizons; some have recoiled in terror and impotence, bunkering down in their settled pedagogical “back east”. 

As if this was not enough to deal with, Columbia University undergraduate Owen Terry last week staked the claim that students are not using ChatGPT for “writing our essays for us”. Rather, they are breaking the task down into components and asking ChatGPT to analyse and suggest ideas for each component. They then use the ideas suggested by ChatGPT, and “modify the structure a bit where I deemed the computer’s reasoning flawed or lackluster”. He argues this makes detecting the use of ChatGPT “simply impossible”.

It seems students are far savvier about how they use AI in education than we might give them credit for, suggests Terry. They are not necessarily looking for the easy route but are engaging with the technology to enhance their understanding and express their ideas. They’re not looking to cheat, just to collate ideas and information more efficiently.

Terry challenges us as educators and researchers to consider that we might be underestimating students’ ethical desire to be broadly educated, rather than to be automatons serving up predictive banality. His searing critique of how we are dealing with our “tools” is blunt – “very few people in power even understand that something is wrong…we’re not being forced to think anymore”. Perhaps contrary to how some might view the challenge, Terry suggests we might even:

need to move away from the take-home essay…and move on to AI-proof assignments like oral exams, in-class writing, or some new style of schoolwork better suited to the world of artificial intelligence.

The urgency of “what do I do with the 2000 new AI apps” seems even greater. These are only the ones released during March. Who knows how many will spring up this month, or next, or by the end of 2023? Who knows how long it will take partisan legislators to act, or what they will come up with in response? Until then, we have to make our own map.

Some have offered a range of educational maps based on alliterative Cs – 4Cs, 6Cs – so here’s a new 4Cs about how we might use AI effectively while we await legislators’ deliberations:

Curation – pick and choose apps which seem to serve the purpose of student learning. Avoid popularity or novelty for its own sake. In considering what this looks like in practice, it is useful to recall the etymology of curation, which comes from the Latin cura, meaning ‘care’. Indeed, if our primary charge is to educate from a holistic perspective, then that care must extend to our choice of AI tools and apps, so that they serve students’ learning needs and engagement.

The fostering of innate curiosity means being unafraid to trial things for ourselves, and with and for our students. But this should not come at the expense of the intended learning outcomes; rather, it should ensure closer alignment with them. When curating AI, be discerning about whether it adds to the richness of student learning.

Clarity – identify for students (and teachers) why any chosen app has educative value. It’s the elevator pitch of 2023 – if you can’t explain its relevance to students in 30 seconds, it’s a big stretch to ask them to be interested. With 2000 new offerings in March alone, the spectres of cognitive load theory and job demands-resources theory loom large.

Competence – don’t ask students to use it if you haven’t explored it sufficiently. Maslow’s wisdom on “having a hammer and seeing every problem as a nail” resonates here. Having a hammer might mean I only see problems as nails, but at least it helps if I know how to use the hammer properly! After all, how many educators really optimise the power, breadth, and depth of Word or Excel…and they’ve been around for a few years now. The rapid proliferation makes developing competence in anything more than just a few key tools quite unrealistic. Further, it is already clear that skills in prompt engineering need to develop more fully in order to maximise AI usefulness.

Character – discussions around AI ethical concerns – including bias in datasets, discriminatory output, environmental costs, and academic integrity – can shape a student’s character and their approach to using AI technologies. Understanding the biases inherent in AI datasets helps students develop traits of fairness and justice, promoting actions that minimise harm. Comprehending the environmental impact of AI models fosters responsibility and stewardship, and may lead to both conscientious use and improvements in future models. Importantly for education, tackling academic integrity heightens students’ sense of honesty, accountability, and respect for others’ work. Students have already risen to the occasion, with local and international research capturing student concerns and their beliefs about the importance of learning to use these technologies ethically and responsibly. Holding challenging conversations about AI ethics prepares students for ethically complex situations, fostering the character necessary in the face of these technologies.

These 4Cs are offered in the spirit of the agile manifesto that has undergirded software development over the last twenty years – early and continuous delivery, and delivering working software frequently. The rapid advance from GPT-3 to GPT-3.5 and GPT-4 shows the manifesto remains a potent rallying call. New iterations of these 4Cs for AI should similarly invite critique, refinement, and improvement.

Dr Paul Kidson is Senior Lecturer in Educational Leadership at the Australian Catholic University, Dr Sarah Jefferson is Senior Lecturer in Education at Edith Cowan University, and Leon Furze is a PhD student at Deakin University researching the intersection of AI and education.