
Artificial Intelligence in Schools: An Ethical Storm is Brewing

‘Artificial intelligence will shape our future more powerfully than any other innovation this century. Anyone who does not understand it will soon find themselves feeling left behind, waking up in a world full of technology that feels more and more like magic.’ (Maini and Sabri, 2017, p.3)

Last week the Australian Government Department of Education released a world-first research report into artificial intelligence and emerging technologies in schools. It was authored by an interdisciplinary team from the University of Newcastle, Australia.

As the project lead, and someone interested in carefully incubating emerging technologies in educational settings to develop an authentic evidence base, I relished the opportunity to explore the often-overlooked ethical aspects of introducing new technology into schools. To this end, I developed a customised ethical framework designed to encourage critical dialogue and increased policy attention on introducing artificial intelligence into schools.

We used to think artificial intelligence would wheel itself into classrooms in the sci-fi guise of a trusty robo-instructor (a vision that is unlikely to come true for some time, if ever). What we didn’t envisage was how artificial intelligence would become invisibly infused into the computing applications we use in everyday life, such as internet search engines, smartphone assistants, social media tagging and navigation technology, and integrated communication suites.

In this blog post I want to tell you about artificial intelligence in schools, give you an idea of the ethical dilemmas our educators are facing, and introduce you to the framework I developed.

What is AI (artificial intelligence)?

Artificial intelligence is an umbrella term that refers to a machine or computer program that can undertake tasks or activities that require features of human intelligence such as planning, problem solving, recognition of patterns, and logical action.

While the term was first coined in the 1950s, the new millennium marked a period of rapid advancement in AI, driven by the expansion of the Internet, the availability of ‘Big Data’ and cloud storage, and more powerful computing and algorithms. Applications of AI have benefited from improvements in computer vision, graphics processing, and speech recognition.

Interestingly, adults and children often overestimate the intelligence and capability of machines, so it is important to understand that right now we are in a period of ‘narrow AI’, which is able to do a single or focused task, sometimes in ways that can outperform humans. The diagram below from our report (adapted from an article in The Conversation by Arend Hintze, Michigan State University’s Assistant Professor of Integrative Biology & Computer Science and Engineering) provides an overview of the types of AI and the current state of play.

AI in education

In education, AI is in some intelligent tutoring systems and powers some pedagogical agents (helpers) in educational software. It can be integrated into the communication suites marketed by Big Tech (for example, in email) and will increasingly be part of learning management systems that present predictive, data-driven performance dashboards to teachers and school leaders. There is also some (very concerning) talk of integrating facial recognition technology into classrooms to monitor the ‘mood’ and ‘engagement’ of students, despite research suggesting that inferring affective states from facial expression is fraught with difficulty.

Engaging with AI in education also involves an understanding of machine learning (ML), whereby algorithms help a machine learn to identify patterns in data and make predictions without pre-programmed models or rules.
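To make that distinction concrete, here is a minimal sketch in Python using the widely available scikit-learn library. The student data, feature choices and numbers are invented purely for illustration; the point is that no one writes an explicit rule such as ‘if attendance is below 80 per cent, the student is at risk’ — the algorithm infers the patterns from labelled examples.

```python
# Illustrative only: instead of hand-coding rules, we give a
# machine-learning algorithm labelled examples and let it find
# the patterns itself. All numbers below are invented.
from sklearn.tree import DecisionTreeClassifier

# Each row: [attendance %, assignments submitted]; label: 1 = passed, 0 = did not pass
past_students = [[95, 10], [88, 9], [60, 4], [55, 3], [92, 8], [48, 2]]
outcomes      = [1,        1,       0,       0,       1,       0]

model = DecisionTreeClassifier()
model.fit(past_students, outcomes)   # the 'learning' step: patterns, not rules

print(model.predict([[70, 6]]))      # a prediction for a new, unseen student
```

The same basic loop — historical data in, learned model out, predictions for new cases — underlies the dashboards and tutoring systems discussed below.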

Worldwide concern about the ethics of AI and ML

The actual and conceivable ethical implications of AI and ML have been canvassed for several decades. Since 2016, the US, UK and European Union have conducted large-scale public inquiries grappling with the question of what a good and just AI society would look like.

As Umeå University’s Professor of Computing Science, Virginia Dignum, puts it:

‘What does it mean for an AI system to make a decision? What are the moral, societal and legal consequences of their actions and decisions? Can an AI system be held accountable for its actions? How can these systems be controlled once their learning capabilities bring them into states that are possibly only remotely linked to their initial, designed, setup? Should such autonomous innovation in commercial systems even be allowed, and how should use and development be regulated?’

Most pressing ethical issues for education

Some of the most pressing ethical issues related to AI and ML in general, and to education in particular, include:

AI bias

AI bias occurs when sexist, racist and other discriminatory assumptions are built into the data sets used to train machine-learning algorithms, and these assumptions then become baked into AI systems. Part of the problem is the lack of diversity in the computing profession, where those who develop AI systems fail to identify the potential for bias or do not adequately test across different populations over the development lifecycle.
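A hedged sketch of how this happens in practice (Python, scikit-learn; the data set is invented and deliberately skewed for illustration): if historical decisions were biased, a model trained on them learns to reproduce that bias, even when the biased attribute is irrelevant to merit.

```python
# Illustrative only: an invented, deliberately skewed data set.
# Suppose historical hiring decisions favoured group A (encoded 0)
# over group B (encoded 1), regardless of test score.
from sklearn.linear_model import LogisticRegression

# Each row: [test score, group]; label: 1 = hired, 0 = not hired
history = [[80, 0], [75, 0], [60, 0], [80, 1], [75, 1], [60, 1]]
hired   = [1,       1,       1,       0,       0,       0]

model = LogisticRegression().fit(history, hired)

# Two identical candidates who differ only in group membership:
print(model.predict([[75, 0], [75, 1]]))   # likely [1 0] - the bias is 'baked in'
```

Nothing in the code is ‘sexist’ or ‘racist’; the discrimination lives in the training data, which is why auditing data sets across populations matters so much.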

Black box nature of AI systems

The ‘black box’, opaque nature of AI systems is a complicated issue. AI is ‘opaque’ because it is often invisibly infused into computing systems in ways that can influence our interactions, decisions, moods and sense of self without us being aware of it.

The ‘black box’ problem of AI is twofold. First, the proprietary nature of AI products creates a situation where industry does not open up the workings of a product and its algorithms to public or third-party scrutiny. Second, in cases of deep machine learning, an autonomous learning and decision-making process occurs with minimal human intervention, and this technical process is so complicated that even the computer scientists who created the program cannot fully explain why the machine came to the decision it did.
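The second, technical sense of the black box can be seen even in a toy example (Python, scikit-learn; the data is invented): a decision tree can print out human-readable if/then rules, whereas a trained neural network offers only matrices of learned weights, with no rule to point to for any individual decision.

```python
# Illustrative contrast between an explainable and an opaque model.
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.neural_network import MLPClassifier

X = [[95, 10], [88, 9], [60, 4], [55, 3]]   # invented student data
y = [1, 1, 0, 0]

tree = DecisionTreeClassifier().fit(X, y)
print(export_text(tree))                    # human-readable if/then rules

net = MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000).fit(X, y)
print(net.coefs_[0].shape)                  # just a (2, 20) matrix of weights -
                                            # there is no rule to inspect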

Digital human rights issues

Digital human rights issues arise from the harvesting of the ‘Big Data’ used in ML where humans have not given informed consent, or where data is used in ways that were not consented to. Issues of consent and privacy extend to the surreptitious collection, storage and sharing of biometric (of the body) data. Biometric data collection represents a threat to the human right to bodily integrity and is legally considered sensitive data, requiring a very careful and fully justified position before implementation, especially with vulnerable populations such as children.

Deep fakes

We are in a world of ‘deep fakes’ and AI-produced media that ordinary (and even technologically expert) humans cannot discern as real or machine-generated. This represents both a serious challenge and an interesting opportunity for teaching and practicing digital literacy. There are even AI programs that produce more-than-passable written work on any topic.

The potential lack of independent advice for educational leaders making decisions on AI and ML

Regulatory capture occurs when those in policy and governance positions (including principals) become dependent on potentially conflicted commercial interests for advice on AI-powered products. While universities may have in-house expertise or the resources to buy in independent expertise to assess AI products, principals making procurement decisions will probably not be able to do this. It is therefore incumbent on educational bureaucracies to seek independent expert advice and to be transparent in their policies and decision-making regarding such procurement, so that school communities can trust that the technology will not do harm through biased systems or by violating teachers’ and students’ sovereignty over their data and privacy.

Our report offers practical advice

Along with our report, the project included infographics on Artificial Intelligence and virtual and augmented reality for students, and ‘short read’ literature reviews for teachers.

In the report we carefully unpack the multi-faceted ethical dimensions of AI and ML for education systems and offer the customised Education, Ethics and AI (EEAI) framework (below) for teachers, school leaders and policy-makers so that they can make informed decisions regarding the design, implementation and governance of AI-powered systems. We also offer a practical ‘worked example’ of how to apply it.

While it is not possible to unpack it all in a blog post, we hope Australian educators can use the report to lead the way in using AI-powered systems for good and for what they are good for.

We want to avoid teachers and students using AI systems that ‘feel more and more like magic’, where educators are unable to explain why a machine made the decision it did in relation to student learning. The very basis of education is being able to make ‘fair calls’, to transparently explain educational action and, importantly, to be accountable for these decisions.

When we lose sight of this, at a school or school-systems level, we find ourselves in questionable ethical and educational territory. Let’s not disregard our core strength as educators in a rush to appear to be innovative.

We are confident that our report is a good first step in prompting an ongoing democratic process to grapple with ethical issues of AI and ML so that school communities can weather the approaching storm.

Erica Southgate is an Associate Professor of Education at the University of Newcastle, Australia. She believes everyone has the potential to succeed, and that digital technology can be used to level the playing field of privilege. Currently she is using immersive virtual reality (VR) to create solutions to enduring educational and social problems. She believes playing and experimenting with technology can build a strong skill set and mindset, and that every student, regardless of their economic situation, should have access to amazing technology for learning. Erica is lead researcher on the VR School Research Project, a collaboration with school communities. Erica has produced ethical guidelines and health and safety resources for teachers so that immersive VR can be used for good in schools. She can be found on Twitter @EricaSouthgate

For those interested in the full report: Artificial Intelligence and Emerging Technologies in Schools

Artificial Intelligence (AI) in schools: are you ready for it? Let’s talk

Interest in the use of Artificial Intelligence (AI) in schools is growing. More educators are participating in important conversations about it as understanding develops around how AI will impact the work of teachers and schools.

In this post I want to add to the conversation by raising some issues and putting forward some questions that I believe are critical. To begin I want to suggest a definition of the term ‘Artificial Intelligence’ or AI as it is commonly known.

What do we mean by ‘Artificial Intelligence’?

Definitions are tricky because the field is so interdisciplinary; that is, it relates to many different branches of knowledge, including computer science, education, game design and psychology, to name just a few.

I like the definition offered by Swedish-American physicist and cosmologist Max Tegmark. He describes Artificial Intelligence systems as being ‘narrowly intelligent because while they are able to accomplish complex goals, each AI system is only able to accomplish goals that are very specific.’

I like this definition because it acknowledges how complex AI can be while making us focus on the reality that AI is narrowly focused on fulfilling specific goals.

We already live in a world full of AI systems including Siri, Alexa, GPS navigators, self-driving cars and so on. In the world of education big international companies are currently working on or already marketing AI systems that develop “intelligent instruction design and digital platforms that use AI to provide learning, testing and feedback to students”.

We need to begin to pay attention to how AI will impact pedagogy, curriculum and assessment in schools, that is, how it will impact end users (teachers and students). There is a lot to think about and talk about here already.

Artificial Intelligence in Education

Conversations about Artificial Intelligence in Education (AIEd) have been going on for many years in the world of education. This year the London Festival of Learning, organised by Professor Rose Luckin and her team, brought together scholars from around the world in the fields of AIEd, Learning at Scale (large-scale online learning platforms) and the Learning Sciences.

Closer to home the NSW Department of Education has been on the front foot in raising awareness of AIEd in a series of papers in its Future Frontiers agenda. This is a compilation of essays that canvas “perspectives from thought leaders, technology experts and futurists from Australia and around the world.” These are helpful articles and thought pieces. They are worth checking out and can serve to inform nascent conversations you might want to have about AIEd.

Questions for schools and teachers

It is important for researchers and teacher educators like myself to explore how AIEd will supplement and change the nature of teachers’ work in schools. We need to understand how this can be done in education so that human intelligence and the relational roles of teachers remain dominant.

How will schools be involved? And how could the changing education landscape be managed as the subject of AIEd attracts more attention?

Leading research scientist and world expert in AIEd at University College London, Professor Rose Luckin (who, incidentally, is a former teacher, school governor and AI developer/computer scientist), captures the core argument when it comes to school education. She says it is more about how teachers and students will develop sufficient understanding of AIEd so that it can be augmented by human intelligence when determining what AIEd should and should not be designed to do. For example, Luckin suggests that if purely technological solutions alone dominate the agenda, then what AIEd can offer for change and transformation in teaching and learning will be limited.

The Australian Government’s Innovation and Science Australia (2017) report, Australia 2030, recommends prioritisation of the “development of advanced capability in artificial intelligence and machine learning in the medium- to long-term to ensure growth of the cyber–physical economy”.

It also lists Education as one of its “five imperatives for the Australian innovation, science and research system” that will equip Australians with skills relevant to 2030, thus highlighting the need to understand the implications of AIEd for schools.

Critical moment for school education

There is conclusive international evidence that we are at a critical moment for setting clearer directions for AIEd in school education.

With crucial questions being asked internationally about AIEd, and local reports like Australia 2030 published, we must start to probe Australian policy makers, politicians, school principals, students and parents, as well as the teaching profession more broadly, about such vital issues. Indeed, the NSW Department of Education held a forum to this end in 2017 and I understand more are planned.

Schools are one focus of the agenda, but how are teacher education programs in universities preparing preservice teachers for this future? Are we considering questions of AI in our preparation programs? If we need to lift the skill levels of all school students to work in an AI world then what changes might we need to make to accommodate AI in school curriculum, assessment, pedagogy, workload and teacher professional learning?

The debate about robots replacing teachers is not the main event. There will be assistants in the form of dashboards, for instance, but humans will still do all the things that machines cannot do.

Moreover, there is a great need for deeper understanding of learning analytics, as the sketch below suggests. There are also questions of opaque systems, bias in algorithms, and policy and governance questions around data ethics. Such topics could form foundational programs in teacher education courses.
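To give a feel for why that deeper understanding matters, here is a minimal, hypothetical sketch in Python of the kind of ‘at-risk’ flag a learning-analytics dashboard might surface to a teacher. The function name, inputs and thresholds are all invented; real systems use far richer data and more complex models, which is exactly why educators need to be able to interrogate how such a flag is produced.

```python
# Hypothetical learning-analytics flag with invented thresholds.
# Real dashboards use far more data and opaque models - which is
# why teachers need to be able to ask how a flag like this is made.
def at_risk(logins_per_week: float, avg_quiz_score: float) -> bool:
    """Flag a student as 'at risk' from two crude engagement signals."""
    return logins_per_week < 2 or avg_quiz_score < 50

print(at_risk(logins_per_week=1, avg_quiz_score=72))   # True - but is it fair?
```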

More hard questions

What implications do AIEd and automated worlds have for school infrastructure? How can higher education and industry support schools to be responsive and supportive to this rapidly changing world of AI?

Leaping back to the London Festival of Learning for a moment: Professor Paulo Blikstein, from Stanford University, painted a grim picture of the dangers that lie ahead in his keynote address, telling his audience that it is time to ‘make hard choices for AIEd.’

He described a phenomenon he calls We Will Take It From Here (WWTIFH) that happens to researchers: tech businesses tell researchers to ‘go away and play with their toys’ while they take over and develop the work technologically, taking things over “in the most horrible way”. Blikstein outlined how most tech companies use algorithms that are impenetrable and do not consult with the field, and there are few policy or ethical guidelines in the US overseeing decision making in these areas. It is a “dangerous cocktail” described by Blikstein’s formula of:

WWTIFH + Going Mainstream + Silicon Valley Culture + Huge Economic Potential = DANGER.

I agree with his caution: people in positions of power in teaching and learning in education need to be aware of the limitations of AI. It can help decision makers, but it should not make decisions for them. This awareness becomes increasingly important as educational leaders interact and work more frequently with tech companies.

In teacher education in Australian universities we must begin to talk more about AIEd with those whom we teach and research. We should be thinking all the time about what AI really is, and we should not be naïve or privilege AI over humans.

As you might sense, I believe this is a serious and necessary dialogue. There are many participants in the AIEd conversation and those involved in education at all levels in Australian schools have an important voice.


Dr Jane Hunter is an early career researcher in the STEM Education Futures Research Centre, Faculty of Arts and Social Sciences at the University of Technology Sydney. She was a classroom teacher and head teacher in schools both in Australia and the UK. Jane is conducting a series of STEM studies focused on building teacher capacity; in this work she leads teachers, school principals, students and communities to better understand and support education change. Her work in initial teacher education has received national and international recognition with a series of teaching awards for outstanding contributions to student learning. She enjoys writing and her research-based presentations at national and international conferences challenge audiences to consider new and alternate education possibilities. A recent podcast with Jane on AIEd can be heard here. Follow her on Twitter @janehunter01
