AI in schools

To understand AI today, we need both why and how

We know AI is such a big deal that just this week the President of the United States, Joe Biden, signed an executive order to try to address the risks of a technology he described as “the most consequential technology of our time”.

So it is no wonder that the proliferation of both AI tools and of conferences during 2023 continues unabated.

And how seriously are we taking the challenge of AI in Australia? Our attention is disproportionately focused on “how”, while larger questions of “why” remain opaque.

Now is a good time to reflect on where we are with AI. We might now have much greater capacity to generate data, but whether this is leading to knowledge, let alone wisdom, is up for serious debate.

A time to reflect

The number of AI tools and their applications to education is overwhelming, and certainly way beyond initial angst about ChatGPT and cheating that set the tone for the start of the 2023 academic year. 

But, as Maslow once wisely mused, only having a hammer makes us see every problem as a nail. If we have these powerful technologies, knowing how to use them can’t be the only issue. We need to talk more about why and when we use them. This goes to the heart of what we hold as the purposes of education. 

The case of the smartphone provides a useful comparison. Smartphones first launched in 1992, but it took until 2007, and the iPhone, to disrupt the technology conversation. Some dreamed of, and seized, the opportunities in education such a device enabled. Others exercised caution, waiting to follow the early adopters only once the path was cleared.

UNESCO advice

Sixteen years later, though, responses have sharpened. UNESCO recently advised that smartphones should only be used where they benefit learning, advice that admittedly seems self-evident. That it has taken so long for such a statement to emerge, though, suggests the “tool” is having ongoing impacts well beyond learning. Sadly, too many examples from schools attest to the harnessing of smartphone power for abusive and manipulative purposes, particularly sexual violence. The rise of AI has only exacerbated these concerns.

The potent combination of learning disengagement and social dysfunction continues to create challenges for how technology is used in schools. There is a rising chorus in support of more handwriting, and some jurisdictions have moved to wholesale bans on mobile phones at school.

How we’ve dealt with smartphones should give us pause for reflection, particularly when some early warning signs about AI are clearly evident. 

When AI whistleblower Timnit Gebru first started in AI research, she lamented the lack of cultural and gender diversity amongst developers. Things have no doubt improved, but cultural and social bias remain significant problems to be addressed.

Flat-footed prose

The much-lauded creative possibilities of generative AI still need development, and come with serious ethical questions. Margaret Atwood recently lamented the lack of creative artistry in outputs based on her own works, concluding that its “flat-footed prose was the opposite of effective storytelling”.

Worse, she argued, was that the texts used to train these models were not even purchased by the company, instead relying on versions scraped – stolen – from the internet. That, in turn, meant any royalty payments she might otherwise have earned were withheld. Australian authors have similarly expressed their frustration. Eking out an existence as an author is challenging enough without pirated works further stealing from these vital cultural voices.

We seem to have a larger challenge, too, buried deep in little-discussed PISA data. Much of the focus on PISA is on test results.

Sobering results

But here is what Volume III contains: students’ perceptions about bigger existential questions on the meaning of life, purpose, and satisfaction. The results, all of which sit below the OECD average, are sobering:

  • 37% of students disagreed or strongly disagreed that “my life has meaning and purpose”;
  • 42% of students disagreed or strongly disagreed that “I have discovered a satisfactory meaning in life”;
  • 36% of students disagreed or strongly disagreed that “I have a clear sense of what gives meaning to my life”.

And this data was collected before the traumas of the 2019–20 Black Summer bushfires and COVID-19. There is much anticipation about what story the more recent round of PISA data collection will tell.

Based on this data, we clearly have much more work to do on our second national educational goal to develop confident and creative individuals who “have a sense of self-worth, self-awareness and personal identity that enables them to manage their emotional, mental, cultural, spiritual and physical wellbeing”. 

What can AI do in pursuit of these goals?

Much of the conversation about AI has been focused on the first part of the first national educational goal – excellence. How can AI be used to improve student learning? How can AI reshape teaching and assessment? More remains to be done on how AI can address the second part – equity.

These concerns are echoed by UNESCO in its recent Global Education Monitoring Report. The opportunities afforded by AI raise new questions about what it means to be educated. Technology is the tool, not the goal, argues the report. AI is to be in the service of developing “learners’ responsibility, empathy, moral compass, creativity and collaboration”.

AI will no doubt bring new possibilities and efficiencies into education, and to that end should be embraced. At the same time, a better test for its value might be that posed recently by Gert Biesta, that we must not:

lose sight of the fact that children and young people are human beings who face the challenge of living their own life, and of trying to live it well.

Attraction to the new, the shiny, the ephemeral, the how, is to be tempered by more fundamental questions of why. Keeping this central to the conversation might prevent us from realising Arendt’s prophecy that our age may exhibit “the deadliest, most sterile passivity history has ever known”.

Dr Paul Kidson is a senior lecturer in Educational Leadership at the Australian Catholic University. Prior to becoming an academic in 2017, he was a school principal for over 11 years. His teaching and research explore how systems and policies govern the work of school leaders, as well as how school leaders develop and sustain their personal leadership story. He previously wrote about artificial intelligence for EduResearch Matters with Sarah Jefferson and Leon Furze here.

Artificial intelligence in Schools: An Ethical Storm is Brewing

‘Artificial intelligence will shape our future more powerfully than any other innovation this century. Anyone who does not understand it will soon find themselves feeling left behind, waking up in a world full of technology that feels more and more like magic.’ (Maini and Sabri, 2017, p.3)

Last week the Australian Government Department of Education released a world-first research report into artificial intelligence and emerging technologies in schools, authored by an interdisciplinary team from the University of Newcastle, Australia.

As the project lead, and someone interested in carefully incubating emerging technologies in educational settings to develop an authentic evidence base, I relished the opportunity to explore the often-overlooked ethical aspects of introducing new technology into schools. To this end, I developed a customised ethical framework designed to encourage critical dialogue and increased policy attention on introducing artificial intelligence into schools.

We used to think artificial intelligence would wheel itself into classrooms in the sci-fi guise of a trusty robo-instructor (a vision that is unlikely to come true for some time, if ever). What we didn’t envisage was how artificial intelligence would become invisibly infused into the computing applications we use in everyday life, such as internet search engines, smartphone assistants, social media tagging and navigation technology, and integrated communication suites.

In this blog post I want to tell you about artificial intelligence in schools, give you an idea of the ethical dilemmas that our educators are facing and introduce you to the framework I developed.

What is AI (artificial intelligence)?

Artificial intelligence is an umbrella term that refers to a machine or computer program that can undertake tasks or activities that require features of human intelligence such as planning, problem solving, recognition of patterns, and logical action.

While the term was first coined in the 1950s, the new millennium marked rapid advancement in AI, driven by the expansion of the internet, the availability of ‘Big Data’ and cloud storage, and more powerful computing and algorithms. Applications of AI have benefited from improvements in computer vision, graphics processing, and speech recognition.

Interestingly, adults and children often overestimate the intelligence and capability of machines, so it is important to understand that right now we are in a period of ‘narrow AI’, which can do a single or focused task, sometimes in ways that outperform humans. The diagram below from our report (adapted from an article in The Conversation by Arend Hintze, Michigan State University’s Assistant Professor of Integrative Biology & Computer Science and Engineering) provides an overview of the types of AI and the current state of play.

AI in education

In education, AI is in some intelligent tutoring systems and powers some pedagogical agents (helpers) in educational software. It can be integrated into the communication suites marketed by Big Tech (for example, in email) and will increasingly be part of learning management systems that present predictive, data-driven performance dashboards to teachers and school leaders. There is also some (very concerning) talk of integrating facial recognition technology into classrooms to monitor the ‘mood’ and ‘engagement’ of students, despite research suggesting that inferring affective states from facial expression is fraught with difficulties.

Engaging with AI in education also involves an understanding of machine learning (ML), whereby algorithms can help a machine learn to identify patterns in data and make predictions without having pre-programmed models or rules.
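A minimal sketch may help make this concrete. The perceptron below, one of the simplest machine-learning algorithms, learns to separate two groups of examples purely by adjusting its weights in response to data; no rule distinguishing the groups is ever programmed in. All names and numbers here are invented for illustration.

```python
# A minimal machine-learning sketch: a perceptron learns a pattern from
# examples alone, with no pre-programmed rule. Data is invented for
# illustration only.

# Training data: (hours_studied, hours_slept) -> passed exam (1) or not (0).
examples = [
    ((8.0, 7.0), 1), ((7.0, 8.0), 1), ((9.0, 6.0), 1),
    ((1.0, 4.0), 0), ((2.0, 3.0), 0), ((0.5, 5.0), 0),
]

w = [0.0, 0.0]   # weights, one per input feature
b = 0.0          # bias term
lr = 0.1         # learning rate

def predict(x):
    """Return 1 if the weighted sum crosses the threshold, else 0."""
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# Repeatedly nudge the weights towards the correct answers.
for _ in range(20):
    for x, target in examples:
        error = target - predict(x)
        w[0] += lr * error * x[0]
        w[1] += lr * error * x[1]
        b += lr * error

# The learned weights now encode a pattern no one hand-coded.
print(predict((8.5, 7.5)))  # a point like the "pass" examples -> 1
print(predict((1.5, 3.5)))  # a point like the "fail" examples -> 0
```

The point for educators is that the ‘rule’ the system ends up applying was never written by a human; it emerged from whatever patterns, good or bad, were present in the training data.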

Worldwide concern about the ethics of AI and ML

The actual and conceivable ethical implications of AI and ML have been canvassed for several decades. Since 2016, the US, UK and European Union have conducted large-scale public inquiries which have grappled with the question of what a good and just AI society would look like.

As Umeå University’s Professor of Computing Science, Virginia Dignum, puts it:

‘What does it mean for an AI system to make a decision? What are the moral, societal and legal consequences of their actions and decisions? Can an AI system be held accountable for its actions? How can these systems be controlled once their learning capabilities bring them into states that are possibly only remotely linked to their initial, designed, setup? Should such autonomous innovation in commercial systems even be allowed, and how should use and development be regulated?’

Most pressing ethical issues for education

Some of the most pressing ethical issues related to AI and ML in general, and for education especially, include:

AI bias

AI bias arises where sexist, racist and other discriminatory assumptions are built into the data sets used to train machine-learning algorithms, and then become baked into AI systems. Part of the problem is the lack of diversity in the computing profession, where those who develop AI systems fail to identify the potential for bias, or do not adequately test across different populations over the lifecycle of development.
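How bias becomes ‘baked in’ can be shown with a toy model. In the sketch below, the historical decisions in the training data are skewed by group, and a model that simply learns to replicate past patterns reproduces that skew for new, equally qualified candidates. The scenario and all data are invented purely for illustration.

```python
# A toy illustration of bias baked into training data. The "historical"
# hiring record below is skewed by group; a model that learns past
# patterns faithfully reproduces the skew. All data is invented.

# (qualification_score, group, hired) in the historical record.
history = [
    (9, "A", True), (8, "A", True), (7, "A", True), (6, "A", True),
    (9, "B", True), (8, "B", False), (7, "B", False), (6, "B", False),
]

# "Training": estimate, per group, the minimum score that was ever hired.
threshold = {}
for score, group, hired in history:
    if hired:
        threshold[group] = min(threshold.get(group, 10), score)

def model(score, group):
    """Predict hiring decisions the way the historical data made them."""
    return score >= threshold[group]

# Two equally qualified candidates receive different predictions,
# purely because of the skew in the training data.
print(model(7, "A"))  # True
print(model(7, "B"))  # False
```

Nothing in the code mentions discrimination; the unequal treatment comes entirely from the data the model was trained on, which is exactly why diverse development teams and testing across populations matter.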

Black box nature of AI systems

The ‘black box’, or opaque, nature of AI systems is complicated. AI is ‘opaque’ because it is often invisibly infused into computing systems in ways that can influence our interactions, decisions, moods and sense of self without us being aware of it.

The ‘black box’ of AI is twofold. First, the proprietary nature of AI products creates a situation where industry does not open up the workings of a product and its algorithms to public or third-party scrutiny. Second, in cases of deep machine learning, an autonomous learning and decision-making process occurs with minimal human intervention, and this technical process is so complicated that even the computer scientists who created the program cannot fully explain why the machine came to the decision it did.

Digital human rights issues

Digital human rights issues relate to the harvesting of the ‘Big Data’ used in ML where humans have not given informed consent, or where data is used in ways that were not consented to. Issues of consent and privacy extend to the surreptitious collection, storage and sharing of biometric (of the body) data. Biometric data collection represents a threat to the human right to bodily integrity, and such data is legally considered sensitive, requiring a very careful and fully justified position before implementation, especially with vulnerable populations such as children.

Deep fakes

We are in a world of ‘deep fakes’ and AI-produced media that ordinary humans (and even technologists) cannot discern as real or machine-generated. This represents serious challenges, and interesting opportunities, for teaching and practising digital literacy. There are even AI programs that produce more than passable written work on any topic.

The potential for a lack of independent advice for educational leaders making decisions on use of AI and ML

Regulatory capture occurs where those in policy and governance positions (including principals) become dependent on potentially conflicted commercial interests for advice on AI-powered products. While universities may have in-house expertise, or the resources to buy in independent expertise, to assess AI products, principals making procurement decisions will probably not be able to do this. It is therefore incumbent on educational bureaucracies to seek independent expert advice, and to be transparent in their policies and decision-making regarding such procurement, so that school communities can trust that the technology will not do harm through biased systems or by violating teachers’ and students’ sovereignty over their data and privacy.

Our report offers practical advice

Along with our report, the project included infographics on Artificial Intelligence and virtual and augmented reality for students, and ‘short read’ literature reviews for teachers.

In the report we carefully unpack the multi-faceted ethical dimensions of AI and ML for education systems, and offer the customised Education, Ethics and AI (EEAI) framework (below) for teachers, school leaders and policy-makers so that they can make informed decisions regarding the design, implementation and governance of AI-powered systems. We also offer a practical ‘worked example’ of how to apply it.

While it is not possible to unpack it all in a blog post, we hope Australian educators can use the report to lead the way in using AI-powered systems for good and for what they are good for.

We want to avoid teachers and students using AI systems that ‘feel more and more like magic’, where educators are unable to explain why a machine made the decision it did in relation to student learning. The very basis of education is being able to make ‘fair calls’, to transparently explain educational action and, importantly, to be accountable for these decisions.

When we lose sight of this, at a school or school-systems level, we find ourselves in questionable ethical and educational territory. Let’s not disregard our core strength as educators in a rush to appear to be innovative.

We are confident that our report is a good first step in prompting an ongoing democratic process to grapple with ethical issues of AI and ML so that school communities can weather the approaching storm.

Erica Southgate is an Associate Professor of Education at the University of Newcastle, Australia. She believes everyone has the potential to succeed, and that digital technology can be used to level the playing field of privilege. Currently she is using immersive virtual reality (VR) to create solutions to enduring educational and social problems. She believes playing and experimenting with technology can build a strong skill and mind set and every student, regardless of their economic situation, should have access to amazing technology for learning. Erica is lead researcher on the VR School Research Project, a collaboration with school communities. Erica has produced ethical guidelines and health and safety resources for teachers so that immersive VR can be used for good in schools. She can be found on Twitter @EricaSouthgate

For those interested in the full report: Artificial Intelligence and Emerging Technologies in Schools