Artificial Intelligence and Emerging Technologies in Schools

Five thoughtful ways to approach artificial intelligence in schools

The use of artificial intelligence in schools is the best current example of what we call a sociotechnical controversy. Partly as a result of political interest in using policy and assessment to steer the work being done in schools, partly due to technological advances, and partly due to the need for digital learning during COVID lockdowns, controversies have emerged regarding edtech and adaptive technologies in schools.

An emerging challenge for research has been how to approach controversies that require technical expertise, domain expertise and assessment expertise to fully grasp the complexity of the problem. That is what we term a 'sociotechnical controversy': an issue or problem where one set of expertise is not enough to fully grasp, and therefore respond to, the issue at hand. A sociotechnical problem requires new methods of engagement, because:

  1. No one set of expertise or experience is able to fully address the multi-faceted aspects of a sociotechnical controversy.
  2. We need to create opportunities for people to come together, to agree and disagree, to share their experience and to understand the limits of what is possible in a given situation. 
  3. We have to be interested in learning from the past to try to understand what is on the horizon, what should be done and who needs to be made aware of that possible future. In other words, we want to be proactive rather than reactive in the policy space.

We are particularly interested in two phenomena seemingly common in Australian education. The first concerns policy, and the ways that governments and government authorities make policy decisions and impose them on schools with little time for consideration, little resourcing devoted to professional preparation, and little awareness of possible unintended consequences. Second, there tends to be a reactive rather than proactive posture with regard to emerging technologies and their potential impacts on schools and classrooms.

This particularly pertains to artificial intelligence (AI) in education, where sides tend to be drawn between those who proselytise about the benefits of education technology and those worried about robots replacing teachers. In our minds, the problem of AI in education could be usefully addressed through a focus on the 2018 controversy regarding the use of automated essay scoring technology, which uses AI, to mark NAPLAN writing assessments. We focused on this example because it crystallised much about how AI in schools is understood, how it is likely to be used in the future, and the impacts it could have on the profession.

On 26 July 2022, 19 academic and non-academic stakeholders, including psychometricians, policy scholars, teachers, union leaders, system leaders, and computer scientists, gathered at the University of Sydney to discuss the use of Automated Essay Scoring (AES) in education, especially in primary and secondary schooling. The combined expertise of this group spanned: digital assessment, innovation, teaching, psychometrics, policy, assessment, privatisation, learning analytics, data science, automated essay feedback, participatory methodologies, and emerging technologies (including artificial intelligence and machine learning). The workshop adopted a technical democracy approach, which aimed not at consensus but at productive dialogue through tension. We collectively experimented with AES tools and, importantly, heard from the profession about the challenges they knew AES would pose for schools and teachers. Our view was that, as AI and tools such as AES are not going away and are already widely used in countries like the United States, planning for their future use is essential. The group also reinforced that any use of AI in schools should be rolled out in a way that places those making decisions in schools, professional educators, at the centre of the process. AI and AES will only be of benefit when they support the profession rather than seek to replace it.
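To give a concrete sense of what an AES tool actually does, here is a minimal, illustrative sketch (in Python, assuming the scikit-learn library, with invented essays and scores; this is not the method behind any particular NAPLAN system). The common pattern is to train a model on essays that human markers have already scored, then apply it to new essays.

```python
# Illustrative AES sketch: learn to predict human-assigned scores from
# text features. Real systems use far richer features and far more data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline

# Hypothetical training data: essays already marked by human assessors.
essays = [
    "The school fair was held on a sunny day and everyone enjoyed it.",
    "Dogs is good pets because they loyal.",
    "Persuasive writing requires a clear thesis supported by evidence.",
    "I like pizza.",
]
human_scores = [4.0, 2.0, 5.0, 1.0]  # e.g. marks against a writing rubric

# TF-IDF turns each essay into a vector of word weights; ridge regression
# then learns which textual patterns correlate with higher human marks.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), Ridge(alpha=1.0))
model.fit(essays, human_scores)

# The trained model now scores unseen essays automatically.
print(model.predict(["A structured argument persuades the reader with evidence."]))
```

What the sketch makes visible is that the system never 'reads' an essay in any human sense: it learns statistical correlations with past human marks, which is precisely why the questions of training data, transparency and professional oversight raised at the workshop matter.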

Ultimately, we suggested five key recommendations.

  1. Time and resources have to be devoted to helping professionals understand, and scrutinise, the AI tools being used in their context. 
  2. There needs to be equity in infrastructure and institutional resourcing to give all institutions the opportunity to engage with the AI tools they see as necessary. We cannot expect individual schools and teachers to overcome inequitable infrastructure such as funding, availability of internet and access to computational hardware. 
  3. Systems that are thinking of using AI tools in schools must prioritise Professional Learning opportunities well in advance of the rollout of any AI tools. This should not come on top of the workload of an already time-poor profession.
  4. Opportunities need to be created to enable all stakeholders to participate in decision-making regarding AI in schools. It should never be something that is done to schools, but rather supports the work they are doing.
  5. Policy frameworks and communities need to be created that guide how to procure AI tools, when to use AI, how to use AI, and why schools might choose not to use AI in particular circumstances. 

From working with diverse stakeholders, it became clear that the introduction of AES in education should always work to reprofessionalise teaching and must be informed by multiple stakeholders' expertise. These discussions should not only include policymakers and ministers across state, territory, and national jurisdictions but must recognise and incorporate the expertise of educators in classrooms and schools. A cooperative process would ensure that diverse stakeholder expertise is integrated across education sector innovation and reforms, such as future AES developments. Educators, policymakers, and EdTech companies must work together to frame the use of AES in schools, as it is likely that AES will be adopted over time. There is an opportunity for Australia to lead the way in the collective development of AES guidance, policy, and regulation. 

Link to whitepaper & policy brief. https://www.sydney.edu.au/arts/our-research/centres-institutes-and-groups/sydney-social-sciences-and-humanities-advanced-research-centre/research.html

Greg Thompson is a professor in the Faculty of Creative Industries, Education & Social Justice at the Queensland University of Technology. His research focuses on the philosophy of education and educational theory. He is also interested in education policy, and the philosophy/sociology of education assessment and measurement with a focus on large-scale testing and learning analytics/big data.

Kalervo Gulson is an Australian Research Council Future Fellow (2019-2022). His current research program looks at education governance and policy futures and the life and computing sciences. It investigates whether new knowledge, methods and technologies from life and computing sciences, with a specific focus on Artificial Intelligence, will substantively alter education policy and governance.

Teresa Swist is Senior Research Officer with the Education Futures Studio and Research Associate for the Developing Teachers’ Interdisciplinary Expertise project at the University of Sydney. Her research interests span participatory methodologies, knowledge practices, and emerging technologies. She has a particular interest in how people with diverse expertise can generate ideas, tools, and processes for collective learning and socio-technical change.

Artificial intelligence in Schools: An Ethical Storm is Brewing

'Artificial intelligence will shape our future more powerfully than any other innovation this century. Anyone who does not understand it will soon find themselves feeling left behind, waking up in a world full of technology that feels more and more like magic.' (Maini and Sabri, 2017, p.3)

Last week the Australian Government Department of Education released the world-first research report into artificial intelligence and emerging technologies in schools. It is authored by an interdisciplinary team from the University of Newcastle, Australia.

As the project lead, and someone interested in carefully incubating emerging technologies in educational settings to develop an authentic evidence base, I relished the opportunity to explore the often-overlooked ethical aspects of introducing new tech into schools. To this end, I developed a customised ethical framework designed to encourage critical dialogue and increased policy attention on introducing artificial intelligence into schools.

We used to think artificial intelligence would wheel itself into classrooms in the sci-fi guise of a trusty robo-instructor (a vision that is unlikely to come true for some time, if ever). What we didn't envisage was how artificial intelligence would become invisibly infused into the computing applications we use in everyday life, such as internet search engines, smartphone assistants, social media tagging and navigation technology, and integrated communication suites.

In this blog post I want to tell you about artificial intelligence in schools, give you an idea of the ethical dilemmas our educators are facing, and introduce you to the framework I developed.

What is AI (artificial intelligence)?

Artificial intelligence is an umbrella term that refers to a machine or computer program that can undertake tasks or activities that require features of human intelligence such as planning, problem solving, recognition of patterns, and logical action.

While the term was first coined in the 1950s, the new millennium marked rapid advancement in AI, driven by the expansion of the Internet, the availability of 'Big Data' and cloud storage, and more powerful computing and algorithms. Applications of AI have benefited from improvements in computer vision, graphics processing, and speech recognition.

Interestingly, adults and children often overestimate the intelligence and capability of machines, so it is important to understand that right now we are in a period of 'narrow AI', which can do a single or focused task, sometimes in ways that outperform humans. The diagram below from our report (adapted from an article in The Conversation by Arend Hintze, Michigan State University's Assistant Professor of Integrative Biology & Computer Science and Engineering) provides an overview of the types of AI and the current state of play.

AI in education

In education, AI sits inside some intelligent tutoring systems and powers some pedagogical agents (helpers) in educational software. It can be integrated into the communication suites marketed by Big Tech (for example, in email) and will increasingly be part of learning management systems that present predictive, data-driven performance dashboards to teachers and school leaders. There is also some (very concerning) talk of integrating facial recognition technology into classrooms to monitor the 'mood' and 'engagement' of students, despite research suggesting that inferring affective states from facial expression is fraught with difficulty.

Engaging with AI in education also involves an understanding of machine learning (ML), whereby algorithms can help a machine learn to identify patterns in data and make predictions without having pre-programmed models or rules.
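As a minimal illustration of that point (a sketch in Python using scikit-learn, with made-up numbers), the program below contains no hand-written rules for deciding who passes; the model induces the pattern from labelled examples and then applies it to new cases:

```python
# Minimal machine learning: no rules are programmed in; the model infers
# a decision boundary purely from labelled examples. Data invented.
from sklearn.tree import DecisionTreeClassifier

# Toy data: [hours_studied, hours_slept] -> passed the exam (1) or not (0).
X = [[1, 4], [2, 5], [8, 7], [9, 8], [7, 6], [1, 6]]
y = [0, 0, 1, 1, 1, 0]

model = DecisionTreeClassifier(max_depth=2, random_state=0)
model.fit(X, y)  # the 'learning' step: patterns are extracted from the data

# The learned model now makes predictions about cases it has never seen.
print(model.predict([[6, 7], [2, 4]]))  # e.g. [1 0]
```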

Worldwide concern about the ethics of AI and ML

The actual and conceivable ethical implications of AI and ML have been canvassed for several decades. Since 2016, the US, UK and European Union have conducted large-scale public inquiries grappling with the question of what a good and just AI society would look like.

As Umeå University's Professor of Computing Science, Virginia Dignum, puts it:

'What does it mean for an AI system to make a decision? What are the moral, societal and legal consequences of their actions and decisions? Can an AI system be held accountable for its actions? How can these systems be controlled once their learning capabilities bring them into states that are possibly only remotely linked to their initial, designed, setup? Should such autonomous innovation in commercial systems even be allowed, and how should use and development be regulated?'

Most pressing ethical issues for education

Some of the most pressing ethical issues related to AI and ML in general, and for education in particular, include:

AI bias

AI bias arises where sexist, racist and other discriminatory assumptions are built into the data sets used to train machine-learning algorithms, and then become baked into AI systems. Part of the problem is the lack of diversity in the computing profession, where those who develop AI systems fail to identify the potential for bias or do not adequately test across different populations over the development lifecycle.
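To make the mechanism concrete, here is a deliberately simple sketch (in Python with scikit-learn; every number is invented) of how a biased historical record produces a biased model:

```python
# Toy demonstration of data bias: in the (invented) historical training
# data, group B applicants were almost never awarded a pass, so the model
# learns group membership itself as a predictor and bakes the bias in.
from sklearn.tree import DecisionTreeClassifier

# Features: [test_score, group]; group A = 0, group B = 1. All data invented.
X = [[90, 0], [85, 0], [40, 0], [35, 0],
     [88, 1], [82, 1], [45, 1], [38, 1]]
y = [1, 1, 0, 0,
     0, 0, 0, 0]  # no group-B member 'passes' in the historical record

model = DecisionTreeClassifier(random_state=0)
model.fit(X, y)

# Two candidates with identical scores, differing only in group membership:
print(model.predict([[87, 0], [87, 1]]))  # -> [1 0]: same score, opposite outcome
```

The point is not the algorithm but the data: the model faithfully reproduces the historical pattern it was given, which is why auditing training data across the development lifecycle matters so much.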

Black box nature of AI systems

The 'black box', or opaque, nature of AI systems is a complicated issue. AI is 'opaque' because it is often invisibly infused into computing systems in ways that can influence our interactions, decisions, moods and sense of self without our being aware of it.

The 'black box' of AI is twofold. First, the proprietary nature of AI products creates a situation where industry does not open up the workings of a product and its algorithms to public or third-party scrutiny. Second, in cases of deep machine learning there is an autonomous learning and decision-making process which occurs with minimal human intervention, and this technical process is so complicated that even the computer scientists who created the program cannot fully explain why the machine came to the decision it did.
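The second problem can be shown in a few lines. In this sketch (Python with scikit-learn, reusing the toy pass/fail data from the earlier example), we have complete access to a trained neural network's parameters, yet they offer no human-readable reason for any individual decision:

```python
# Even with full access to a trained network's parameters, the raw numbers
# do not translate into reasons for an individual decision. Data invented.
from sklearn.neural_network import MLPClassifier

# Toy data: [hours_studied, hours_slept] -> passed the exam (1) or not (0).
X = [[1, 4], [2, 5], [8, 7], [9, 8], [7, 6], [1, 6]]
y = [0, 0, 1, 1, 1, 0]

net = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
net.fit(X, y)

print(net.predict([[6, 7]]))  # the decision: a bare 1 or 0
print(net.coefs_[0])          # the 'explanation': a matrix of raw weights
# Nothing in those weights says why this input received this decision.
# That gap is the interpretability problem in miniature.
```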

Digital human rights issues

Digital human rights issues relate to the harvesting of the 'Big Data' used in ML where humans have not given informed consent, or where data is used in ways that were not consented to. Issues of consent and privacy extend to the surreptitious collection, storage and sharing of biometric (of the body) data. Biometric data collection represents a threat to the human right to bodily integrity and is legally considered sensitive data, requiring a very careful and fully justified position before implementation, especially with vulnerable populations such as children.

Deep fakes

We are in a world of 'deep fakes' and AI-produced media that ordinary people (and even technologists) cannot discern as real or machine-generated. This represents a serious challenge to, and interesting opportunities for, teaching and practising digital literacy. There are even AI programs that produce more than passable written work on any topic.
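To give a sense of how low the barrier is, the sketch below (assuming the open-source Hugging Face transformers library and the small, freely downloadable GPT-2 model) produces plausible-looking prose from a one-line prompt:

```python
# A few lines are enough to generate machine-written prose.
# Assumes: pip install transformers torch
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
result = generator("The causes of the First World War were",
                   max_new_tokens=60, num_return_sequences=1)
print(result[0]["generated_text"])
```

Newer, larger models produce far more fluent text than GPT-2, which is precisely the challenge for assessing written work that this section points to.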

The potential for a lack of independent advice for educational leaders making decisions on use of AI and ML

Regulatory capture is where those in policy and governance positions (including principals) become dependent on potentially conflicted commercial interests for advice on AI-powered products. While universities may have in-house expertise or the resources to buy in independent expertise to assess AI products, principals making procurement decisions will probably not be able to do this. Furthermore, it is incumbent on educational bureaucracies to seek independent expert advice and to be transparent in their policies and decision-making regarding such procurement, so that school communities can trust that the technology will not do harm through biased systems or by violating teachers' and students' sovereignty over their data and privacy. 

Our report offers practical advice

Along with our report, the project included infographics on Artificial Intelligence and virtual and augmented reality for students, and ‘short read’ literature reviews for teachers.

In the report we carefully unpack the multi-faceted ethical dimensions of AI and ML for education systems and offer the customised Education, Ethics and AI (EEAI) framework (below) for teachers, school leaders and policy-makers, so that they can make informed decisions regarding the design, implementation and governance of AI-powered systems. We also offer a practical 'worked example' of how to apply it.

While it is not possible to unpack it all in a blog post, we hope Australian educators can use the report to lead the way in using AI-powered systems for good and for what they are good for.

We want to avoid teachers and students using AI systems that 'feel more and more like magic', where educators are unable to explain why a machine made the decision it did in relation to student learning. The very basis of education is being able to make 'fair calls', to transparently explain educational action and, importantly, to be accountable for these decisions.

When we lose sight of this, at a school or school-systems level, we find ourselves in questionable ethical and educational territory. Let’s not disregard our core strength as educators in a rush to appear to be innovative.

We are confident that our report is a good first step in prompting an ongoing democratic process to grapple with ethical issues of AI and ML so that school communities can weather the approaching storm.

Erica Southgate is an Associate Professor of Education at the University of Newcastle, Australia. She believes everyone has the potential to succeed, and that digital technology can be used to level the playing field of privilege. Currently she is using immersive virtual reality (VR) to create solutions to enduring educational and social problems. She believes playing and experimenting with technology can build a strong skill set and mindset, and that every student, regardless of their economic situation, should have access to amazing technology for learning. Erica is lead researcher on the VR School Research Project, a collaboration with school communities. Erica has produced ethical guidelines and health and safety resources for teachers so that immersive VR can be used for good in schools. She can be found on Twitter @EricaSouthgate

For those interested in the full report: Artificial Intelligence and Emerging Technologies in Schools