Artificial Intelligence technology and teaching

ChatGPT: What do we know now? What must we learn next?

I was honoured to join a TEQSA/CRADLE panel last week, the third in a series on the implications of ChatGPT (or GenAI more broadly) for higher education. In the second panel in March, I flagged the absence (at that early stage) of any evidence about whether students have the capacity to engage critically with ChatGPT. So many people were proposing to do interesting, creative things with students — but we didn’t know how it would turn out.

But three months on, we now have:

* myriad demos of GPT’s capabilities given the right prompts

* a few systematic evaluations of that capability

* myriad proposals for how this can enable engaging student learning, and a small but growing stream of educators’ stories from the field, with peer-reviewed research about to hit the streets.

I also urge us to harness the diverse brilliance of our student community in navigating this system shock, sharing what we’re learning from our Student Partnership in AI. 

The following is an edited transcript of my presentation to the panel.

Educators are now beginning to be able to articulate what ChatGPT literacy looks like, they have a sense of the range of ability within their cohort, and they’re beginning to gain insights into how to better scaffold their students. So, for example, I’ve been talking to some of my most exciting colleagues here at UTS, asking them to tell me: how are you using ChatGPT? And in particular, what’s the capacity of your students to engage critically with its output? That is something we hear about all the time. Three months ago, we really couldn’t articulate what that looked like. Now we can. So let me give you four glimpses of what some of my colleagues were saying to me. 


Antonette Shibani teaches applied NLP to data science master’s students; they have to write a critical summary and a visual map of ethical issues using NLP. They’re encouraged to use ChatGPT for various purposes and to reflect on their use of it and how useful it was. The most able students, she tells me, could engage in deep conversations with the AI, using excellent prompts and follow-up replies to the agent, whereas the less able students were using simple prompts to access content and didn’t have deeper discussions with the AI.


Here’s Baki Kocaballi, teaching interaction design, whose students are using GPT to develop user personas and scenarios, ideate solutions and reflect critically. The most able students were doing this; they were generating rich scenarios with ChatGPT. Yet he was not seeing any critical reflection on what made an AI output an appropriate or accurate response, and Baki reflects that this may be something to do with the subjective nature of design practice. The less able students could still get some good responses, but there was not much good reflection going on, and he notes that he needs to provide more scaffolding and examples for the students. So we see this professional learning amongst the teachers as well. 


Here’s Anna Lidfors Lindqvist, training student teams to work together to build robots, and again encouraging their use of ChatGPT and critical reflection on it. The most able students could use it in quite fluent and effective ways. But the less able students, she notes, are not really validating and checking GPT’s calculations, and they’re struggling to apply information in the context of the project. Some actually just dropped GPT altogether: it’s just too much work to get it to do anything useful.


And a final example from Behzad Fatahi, teaching second- and third-year students, who are using ChatGPT alongside a simulation tool called Plaxis to analyse soil–structure interaction problems. The most engaged students were making good use of this combination, but the least engaged students were struggling. So, the point is not so much the details — the point is that our academics are starting to know: what does good look like? What can I expect from my students? There is clearly a diversity of ability to engage critically with a conversational, generative AI.

And when you step back from these particular examples and ask again: what are the foundational concepts, and the evidence base as it grows, around what we could call generative AI literacy? Literacy for learning, that is: not for marketing, not for any other purpose, useful as those may be, but for learning.

Conversational agents are not new in the world of education. They’ve been around in the AI-in-education research literature for donkey’s years, but used well, they should move us towards more dialogical learning and feedback. We’re all used to thinking about student peer feedback in our learning designs; students are now going to be interacting with agents as well. Those agents will be interacting with them, and potentially with other agents, playing different roles, and we will learn how to orchestrate these agents and define the roles they need to play. 

And every turn in this conversation is a form of feedback. The question is what move does the student make next? How do they engage with that feedback from humans and machines?
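
To make that concrete, here is a minimal sketch of what orchestrating role-playing agents around a student’s move might look like. It is an illustration, not a description of any real system: the roles, the Agent class and the stubbed call_model() function are all invented, and a real implementation would call an actual generative AI model at that point.

```python
# A minimal, illustrative sketch of orchestrating role-playing agents in a
# learning conversation. The roles and the stubbed call_model() function are
# hypothetical; a real system would call an actual generative AI model here.

from dataclasses import dataclass

@dataclass
class Agent:
    name: str
    role_prompt: str  # the role this agent is asked to play

def call_model(role_prompt: str, transcript: list[str]) -> str:
    """Stand-in for a generative model call; returns a canned reply here."""
    return f"({role_prompt}) responding to: {transcript[-1]}"

def run_turns(agents: list[Agent], student_move: str, rounds: int = 1) -> list[str]:
    """Each agent replies in its role; each reply becomes context for
    whoever speaks next, so every turn is a form of feedback."""
    transcript = [f"Student: {student_move}"]
    for _ in range(rounds):
        for agent in agents:
            reply = call_model(agent.role_prompt, transcript)
            transcript.append(f"{agent.name}: {reply}")
    return transcript

if __name__ == "__main__":
    agents = [
        Agent("Socratic tutor", "ask a probing question, never give the answer"),
        Agent("Peer reviewer", "point out one strength and one gap"),
    ]
    for line in run_turns(agents, "Here is my draft thesis statement..."):
        print(line)
```

The shape of the loop is the point: every reply lands in a shared transcript, so each turn becomes feedback that the student, or the next agent, can act on.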

Now, we have concepts and evidence from pre-generative AI research around this. We have concepts such as student feedback literacy, and we have been taking inspiration from that and talking now about student automated feedback literacy. There is the notion of teacher feedback literacy as well, and similarly we’re working on teacher automated feedback literacy. These are powerful concepts, I think, for helping us think about how we can study and equip students to engage in powerful learning conversations.

The final point I want to make is we need to work with our students.

We’ve been working with our Students’ Association here at UTS. We had over 150 applicants for a workshop, from whom we took a stratified sample of 20. They engaged in pre-workshop readings in which we presented them with a range of dilemmas involving ChatGPT and Turnitin, took part in online discussions, and attended a face-to-face workshop. They got briefings from UTS experts introducing generative AI, explaining how it’s being used creatively at UTS (such as the examples I just showed you), discussing the ethical issues around generative AI, and talking about Turnitin: what do we know about it, and should we turn it on? That is a decision we’re trying to make at UTS at the moment. There were breakout groups and a plenary discussion, and we have a report currently under review by the students as to whether they’re happy with it as a summary of what they talked about.

But let me just share three examples of what they told us and you’ll see some echoes here with what we heard from Rowena Harper earlier. 

  • Firstly, they are asking: please equip us to use ChatGPT for learning. We are aware that it could actually undermine our learning if we don’t use it well, but what does that mean? You’re the educators — you should be able to tell us how to use it effectively for learning, and not in a way that torpedoes our learning. 
  • Secondly, can we have more assessments integrating ChatGPT in sensible ways? They were very excited to see examples such as the ones I showed you, because not all of them have experienced that yet. 
  • And finally, Turnitin. Well, yes, it may have a role to play as part of an overall approach to academic integrity, but please handle it with care. If there are any questions about our academic integrity, we want to be invited for a respectful conversation, and not be accused of misconduct when, as we are already hearing, Turnitin is backing off from some of its original claims about how good its software is. It’s a very fast-moving arms race. 

So just to wrap up, here are three questions about what we need to learn next. 

  • What do we mean by generative AI literacy and how do we scaffold it? 
  • How well do generative AI learning designs translate across contexts? They may look very promising, but we have to actually deploy those and study them in context. 
  • And finally, how are we going to engage our students in codesigning this radical shift with us? We talk a lot about diversity of voices and the design of AI. We absolutely need them on board, trusting the way we’re using this technology, seeing that we’re using it responsibly and ethically, and bringing the perspectives that they have. They’re the ones on the receiving end of all this policy we’re talking about.

Simon Buckingham Shum is the director of the Connected Intelligence Centre at the University of Technology Sydney. He has a career-long fascination with the potential of software to make thinking visible. His work sits at the intersection of the multidisciplinary fields of Human-Computer Interaction, Educational Technology, Hypertext, Computer-Supported Collaboration and Educational Data Science (also known as Learning Analytics).

Five thoughtful ways to approach artificial intelligence in schools

The use of artificial intelligence in schools is the best example we have right now of what we call a sociotechnical controversy. Partly as a result of political interest in using policy and assessment to steer the work being done in schools, partly due to technological advances, and partly due to the need for digital learning during COVID lockdowns, controversies have emerged regarding edtech and adaptive technologies in schools. 

An emerging challenge for research has been how to approach these controversies, which require technical expertise, domain expertise and assessment expertise to fully grasp the complexity of the problem. That’s what we term a ‘sociotechnical controversy’: an issue or problem where one set of expertise is not enough to fully grasp, and therefore respond to, the issue at hand. A sociotechnical problem requires new methods of engagement, because: 

  1. No one set of expertise or experience is able to fully address the multi-faceted aspects of a sociotechnical controversy.
  2. We need to create opportunities for people to come together, to agree and disagree, to share their experience and to understand the limits of what is possible in a given situation. 
  3. We have to be interested in learning from the past to try to understand what is on the horizon, what should be done and who needs to be made aware of that possible future. In other words, we want to be proactive rather than reactive in the policy space.

We are particularly interested in two phenomena seemingly common in Australian education. The first concerns policy, and the ways that governments and government authorities make policy decisions and impose them on schools with little time for consideration, little resourcing devoted to professional preparation, and little awareness of possible unintended consequences. Second, there tends to be a reactive rather than proactive posture with regard to emerging technologies and their potential impacts on schools and classrooms. 

This particularly pertains to artificial intelligence (AI) in education, in which sides tend to be drawn between those who proselytize about the benefits of education technology and those worried about robots replacing teachers. In our minds, the problem of AI in education could be usefully addressed through a focus on the controversy in 2018 regarding the use of automated essay scoring technology, which uses AI, to mark NAPLAN writing assessments. We focused on this example because it crystallised much about how AI in schools is understood, how it is likely to be used in the future, and the impacts it could have on the profession. 
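
To make concrete what is being debated here, the sketch below shows a deliberately simplified form of AES: extract surface features from an essay, then fit a statistical model to human-assigned scores. Real AES systems are far more sophisticated; the features, essays and scores below are invented purely for illustration.

```python
# A deliberately simplified sketch of automated essay scoring (AES):
# extract surface features from essays, then fit a regression model to
# human-assigned scores. Real AES systems use far richer features and
# models; this tiny data set is invented for illustration only.

import re
from sklearn.linear_model import LinearRegression

def features(essay: str) -> list[float]:
    """Turn an essay into crude numeric features."""
    words = re.findall(r"[a-zA-Z']+", essay.lower())
    n_words = len(words)
    type_token_ratio = len(set(words)) / max(n_words, 1)  # vocabulary richness
    avg_word_len = sum(map(len, words)) / max(n_words, 1)
    return [float(n_words), type_token_ratio, avg_word_len]

# Invented training essays with human scores out of 10.
train = [
    ("The cat sat.", 2.0),
    ("Rivers shape the land over centuries through erosion and deposition.", 7.0),
    ("Erosion, transportation and deposition interact to shape river valleys, "
     "producing landforms such as meanders, floodplains and deltas.", 9.0),
]

X = [features(essay) for essay, _ in train]
y = [score for _, score in train]

model = LinearRegression().fit(X, y)  # learn feature weights from human marks
new_essay = "Water wears away rock slowly over many years."
print(round(model.predict([features(new_essay)])[0], 1))  # machine-assigned score
```

Even this toy version makes the stakes of the controversy visible: the machine rewards whatever surface features happen to correlate with the human marks it was trained on.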

On July 26 2022, 19 academic and non-academic stakeholders, including psychometricians, policy scholars, teachers, union leaders, system leaders, and computer scientists, gathered at the University of Sydney to discuss the use of Automated Essay Scoring (AES) in education, especially in primary and secondary schooling. The combined expertise of this group spanned: digital assessment, innovation, teaching, psychometrics, policy, assessment, privatisation, learning analytics, data science, automated essay feedback, participatory methodologies, and emerging technologies (including artificial intelligence and machine learning). The workshop adopted a technical democracy approach which aimed not at consensus but at productive dialogue through tension. We collectively experimented with AES tools and, importantly, heard from the profession regarding what they knew would be challenges posed by AES for schools and teachers. Our view was that, as AI and tools such as AES are not going away and are already widely used in countries like the United States, planning for their future use is essential. The group also reinforced that any use of AI in schools should be rolled out in such a way as to place those making decisions in schools, professional educators, at the centre of the process. AI and AES will only be of benefit when they support the profession rather than seek to replace it.

Ultimately, we suggested five key recommendations.

  1. Time and resources have to be devoted to helping professionals understand, and scrutinise, the AI tools being used in their context. 
  2. There needs to be equity in infrastructure and institutional resourcing to enable all institutions the opportunity to engage with the AI tools they see as necessary. We cannot expect individual schools and teachers to overcome inequitable infrastructure such as funding, availability of internet and access to computational hardware. 
  3. Systems that are thinking of using AI tools in schools must prioritise Professional Learning opportunities well in advance of the rollout of any AI tools. This should not be loaded on top of an already time-poor profession. 
  4. Opportunities need to be created to enable all stakeholders to participate in decision-making regarding AI in schools. It should never be something that is done to schools, but rather supports the work they are doing.
  5. Policy frameworks and communities need to be created that guide how to procure AI tools, when to use AI, how to use AI, and why schools might choose not to use AI in particular circumstances. 

From working with diverse stakeholders it became clear that the introduction of AES in education should always work to reprofessionalise teaching and must be informed by multiple stakeholder expertise. These discussions should not only include policymakers and ministers across state, territory, and national jurisdictions but must recognise and incorporate the expertise of educators in classrooms and schools. A cooperative process would ensure that diverse stakeholder expertise is integrated across education sector innovation and reforms, such as future AES developments. Educators, policymakers, and EdTech companies must work together to frame the use of AES in schools as it is likely that AES will be adopted over time. There is an opportunity for Australia to lead the way in the collective development of AES guidance, policy, and regulation. 

Link to the whitepaper and policy brief: https://www.sydney.edu.au/arts/our-research/centres-institutes-and-groups/sydney-social-sciences-and-humanities-advanced-research-centre/research.html

Greg Thompson is a professor in the Faculty of Creative Industries, Education & Social Justice at the Queensland University of Technology. His research focuses on the philosophy of education and educational theory. He is also interested in education policy, and the philosophy/sociology of education assessment and measurement with a focus on large-scale testing and learning analytics/big data.

Kalervo Gulson is an Australian Research Council Future Fellow (2019-2022). His current research program looks at education governance and policy futures and the life and computing sciences. It investigates whether new knowledge, methods and technologies from life and computing sciences, with a specific focus on Artificial Intelligence, will substantively alter education policy and governance.

Teresa Swist is Senior Research Officer with the Education Futures Studio and Research Associate for the Developing Teachers’ Interdisciplinary Expertise project at the University of Sydney. Her research interests span participatory methodologies, knowledge practices, and emerging technologies. She has a particular interest in how people with diverse expertise can generate ideas, tools, and processes for collective learning and socio-technical change.

Virtual Reality in school education: Australia leads the way with groundbreaking research

In 2016, I attended a meeting and fortuitously sat next to the (now retired) principal of Callaghan College, who asked me what type of research I’d like to do in schools. At the time a new high-end, highly immersive type of virtual reality (VR) hardware called the Oculus Rift had been released. This type of VR equipment was costly and needed an expensive computer to run, but it offered entry into amazing worlds. It provided high-fidelity environments to be explored through gestural interaction via controllers that allowed you to use your virtual hands to interact with virtual objects and avatars (either other people or computer characters) and navigate in ways that felt incredibly embodied (I am addicted to flying and jumping off clouds in VR).

I made a gentle pitch that I’d like to work with teachers to embed this technology into classrooms to see how it could be used for learning, but that I had no idea what we might find. And so began the VR School Study, a collaboration with Callaghan College and later, Dungog High School, both government high schools in NSW, Australia. It became the first research internationally to embed high-end VR in school classrooms.

VR School Study

The VR School Study is ongoing participatory research that aims to explore the use of immersive virtual reality in real classrooms. We focus on how VR can be used to enhance learning, its relationship to curriculum, and its implications for pedagogy. And we examine all the practical, ethical and safety issues that come with integrating emerging technology in classrooms. At the end of 2018, the study reached a milestone with the completion of two major case studies into the use of the technology in secondary schools.

An ‘arduous’ adventure in emerging technology

In 2018, on the last day of research at Callaghan College, I interviewed two teachers about what it was like to embed an emerging technology in the classroom. The response was, ‘Arduous comes to mind.’ While we did have a laugh, the comment summed up a range of issues encountered during the research.

Space to accommodate VR and safety concerns

Trying to find an available classroom space large enough to accommodate the play areas needed for this VR, which is best used standing and moving around, proved difficult. On one campus we managed to get a room with a small storeroom off it that squeezed in three sets of VR equipment with play areas, while at the other we had a larger former lab-preparation room attached to a classroom. Both VR rooms were beyond the immediate supervisory gaze of the teacher and so required me or a student to act as a safety ‘spotter’ to ensure there were no collisions with walls, furniture or peers. Even though there is a built-in ‘Guardian System’ (a pop-up virtual cage mapped to the real environment you should stay within), some students became so immersed that they ignored it and needed intervention. Even now, with ‘pass through’ cameras in some VR headsets (these allow the user to see the outside environment when they go beyond the Guardian System), some people become so immersed and are interacting with such speed that they can run into objects. Engineered safety solutions are not always enough to maintain safety.

Network and server issues

Getting the tech to work within the confines of the school internet network proved difficult. Game stores that allow multiplayer environments were blocked, and internet work-arounds were required. Teachers had to set up individual student accounts, which was time-consuming, and often had to update applications in their own time. Our screen-capture video, which showed a first-person view of what the student was seeing and doing in a virtual environment, indicated that the technology failed 15% of the time due to network, server and VR tracking drop-out. One of my favourite moments of student humour and resilience was when I heard one boy say to another, as they were fixing a server issue for the third time, “Aren’t you glad you signed up for this?”.

Content mastery and creativity through collaboration

Students were given the highest quality VR and ‘sandbox’ applications, such as Minecraft VR and Tilt Brush, which allowed them to create in virtual environments without needing to code. Combined with clever curriculum design, these tools let students undertake self-directed formative assessment tasks.

In Year 9 science this involved groups researching and developing a model of a body organ in Minecraft VR. The results were an astounding mix of scientific knowledge melded with creative endeavour developed through group problem-solving and collaboration inside and outside of VR.

Brain from up high

One group produced an anatomically correct, labelled eyeball which was toured via a rollercoaster, while another built a skyscraper of a brain sitting atop a spinal cord, which you flew up to interact with engineered components representing neurons. While in VR, students narrated from memory the parts and function of the brain. Analysis of the screen-capture video, using a framework adapted from work by Jonna Malmberg, Assistant Professor in Learning and Learning Processes at the University of Oulu, indicated that the majority of students used the creative properties of VR to engage in highly collaborative science learning.

Inside the brain

At Dungog High School a senior drama class used the single-player 3D sculpting program Tilt Brush as an infinite virtual design studio to explore symbolism in set design at real-life scale and beyond. Students worked in groups to quickly prototype symbolic elements of their directorial vision, with peers and the teacher moving in and out of VR to offer feedback. Mistakes were erased or changes made at the press of a button. The virtual studio of Tilt Brush melded with the drama studio to offer students an opportunity to view their design in 3D from the perspective of an audience member, director, designer or actor. All they needed to do was teleport around the virtual environment.

Let’s leave behind the EdTech evangelism

An admission – I’m not a fan of the type of innovation discourse which permeates university managerial-speak and is associated with EdTech (educational technology) evangelism. This type of talk conjures up images of momentous leaps in ways of doing and knowing, with the trope of the lone genius (male, yes, it is a gendered trope) leading the charge with their vision of the future.

Innovation is incorrectly depicted as a development shortcut detached from contexts and from the years of work that yield incremental improvements and insights, as Stanford University director Christian Seelos and his colleague Johanna Mair argue. They warn against evaluating innovation only on positive outcomes, as this can stifle the experimentation required to progress an initiative in difficult or unpredictable environments.

This aligns with critical studies in EdTech, where research is on the ‘state-of-the-actual’ rather than the ‘state-of-the-art’, as Neil Selwyn, Distinguished Research Professor in the Faculty of Education at Monash University, reminds us. It entails moving away from trying to ‘prove’ a technology works for learning to scrutinizing what actually takes place, especially in contexts that are not the ‘model’ well-resourced schools where technologies are often tested.

Teleporting away for now

As I have argued elsewhere, to get the best ethical and educational outcomes with emerging technologies we must carefully incubate these in schools (and not just resource-rich ones) in collaboration with willing teachers so that we can document incremental ‘innovation’ through ‘state-of-the-actual’ reporting. This can be an arduous project but one full of authentic and valuable insights for those willing to go on a research and pedagogical adventure. It’s this type of evidence, not EdTech evangelism, that we need.

For those who want more: in May 2020, I published findings from the study in Virtual Reality in Curriculum and Pedagogy: Evidence from Secondary Classrooms (Routledge). As co-researchers, teachers from Callaghan College and Dungog High School contributed to their respective chapters in this book. The book offers new pedagogical frameworks for understanding how best to use the properties of VR for deeper learning, as well as a ‘state-of-the-actual’ account of the ethical, practical and technical aspects of using VR in low-income school communities.

Erica Southgate (PhD) is Associate Professor of Emerging Technologies for Education at the University of Newcastle, Australia. She is lead author of the recent Australian Government commissioned report, Artificial intelligence and emerging technologies (virtual, augmented and mixed reality) in schools research report, and a maker of computer games for literacy learning. Erica is always looking for brave teachers to collaborate with on research and can be contacted at Erica.southgate@newcastle.edu.au. Erica is on Twitter @EricaSouthgate

Artificial intelligence in Schools: An Ethical Storm is Brewing

‘Artificial intelligence will shape our future more powerfully than any other innovation this century. Anyone who does not understand it will soon find themselves feeling left behind, waking up in a world full of technology that feels more and more like magic.’ (Maini and Sabri, 2017, p.3)

Last week the Australian Government Department of Education released the world-first research report into artificial intelligence and emerging technologies in schools. It is authored by an interdisciplinary team from the University of Newcastle, Australia.

As the project lead, and someone interested in carefully incubating emerging technologies in educational settings to develop an authentic evidence base, I relished the opportunity to explore the often-overlooked ethical aspects of introducing new tech into schools. To this end, I developed a customised ethical framework designed to encourage critical dialogue and increased policy attention on introducing artificial intelligence into schools.

We used to think artificial intelligence would wheel itself into classrooms in the sci-fi guise of a trusty robo-instructor (a vision that is unlikely to come true for some time, if ever). What we didn’t envisage was how artificial intelligence would become invisibly infused into the computing applications we use in everyday life, such as internet search engines, smartphone assistants, social media tagging and navigation technology, and integrated communication suites.

In this blog post I want to tell you about artificial intelligence in schools, give you an idea of the ethical dilemmas that our educators are facing and introduce you to the framework I developed.

What is AI (artificial intelligence)?

Artificial intelligence is an umbrella term that refers to a machine or computer program that can undertake tasks or activities that require features of human intelligence such as planning, problem solving, recognition of patterns, and logical action.

While the term was first coined in the 1950s, the new millennium marked rapid advancement in AI driven by the expansion of the Internet, availability of ‘Big Data’ and Cloud storage and more powerful computing and algorithms. Applications of AI have benefited from improvements in computer vision, graphics processing, and speech recognition.

Interestingly, adults and children often overestimate the intelligence and capability of machines, so it is important to understand that right now we are in a period of ‘narrow AI’, which is able to do a single or focused task, sometimes in ways that outperform humans. The diagram below from our report (adapted from an article in The Conversation by Arend Hintze, Michigan State University’s Assistant Professor of Integrative Biology & Computer Science and Engineering) provides an overview of the types of AI and the current state of play.

AI in education

In education, AI is in some intelligent tutoring systems and powers some pedagogical agents (helpers) in educational software. It can be integrated into the communication suites marketed by Big Tech (for example in email) and will increasingly be part of learning management systems that present predictive, data-driven performance dashboards to teachers and school leaders. There is also some (very concerning) talk of integrating facial recognition technology into classrooms to monitor the ‘mood’ and ‘engagement’ of students, despite research suggesting that inferring affective states from facial expression is fraught with difficulties.

Engaging with AI in education also involves an understanding of machine learning (ML), whereby algorithms can help a machine learn to identify patterns in data and make predictions without having pre-programmed models or rules.
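
A toy example may help make that distinction concrete. In the hypothetical sketch below, no pass/fail rule is ever written by the programmer; the program is handed labelled examples (the data is invented) and induces the pattern itself.

```python
# A minimal illustration of machine learning as described above: the program
# is given labelled examples rather than rules, and induces the pattern
# itself. The toy data (hours studied, hours slept -> pass/fail) is invented.

from sklearn.tree import DecisionTreeClassifier

X = [[1, 4], [2, 8], [6, 7], [8, 6], [3, 5], [9, 8]]  # [hours_studied, hours_slept]
y = [0, 0, 1, 1, 0, 1]                                # 0 = fail, 1 = pass

clf = DecisionTreeClassifier().fit(X, y)  # no pass/fail rule was programmed in
print(clf.predict([[7, 7]]))              # the induced pattern predicts: [1]
```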

Worldwide concern about the ethics of AI and ML

The actual and conceivable ethical implications of AI and ML have been canvassed for several decades. Since 2016, the US, UK and European Union have conducted large-scale public inquiries which have grappled with the question of what a good and just AI society would look like.

As Umeå University’s Professor of Computing Science, Virginia Dignum, puts it:

‘What does it mean for an AI system to make a decision? What are the moral, societal and legal consequences of their actions and decisions? Can an AI system be held accountable for its actions? How can these systems be controlled once their learning capabilities bring them into states that are possibly only remotely linked to their initial, designed, setup? Should such autonomous innovation in commercial systems even be allowed, and how should use and development be regulated?’

Most pressing ethical issues for education

Some of the most pressing ethical issues related to AI and ML in general, and for education especially, include:

AI bias

AI bias arises where sexist, racist and other discriminatory assumptions are built into the data sets used to train machine-learning algorithms, and then become baked into AI systems. Part of the problem is the lack of diversity in the computing profession, where those who develop AI systems fail to identify the potential for bias or do not adequately test across different populations over the lifecycle of development.
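
This mechanism is easy to demonstrate with a toy model. In the hypothetical sketch below, a classifier is trained on historical decisions that treated two invented groups differently for the same scores; the trained model then reproduces that discrimination on identical inputs.

```python
# A toy demonstration of the mechanism described above: if historical
# decisions in the training data were biased against one group, the learned
# model bakes that bias in. Groups, scores and labels are entirely invented.

from sklearn.tree import DecisionTreeClassifier

# Each example is [test_score, group]; the label is 1 = accept, 0 = reject.
# Historically, group 1 applicants were rejected at scores where group 0
# applicants were accepted.
X = [[80, 0], [85, 0], [90, 0], [80, 1], [85, 1], [90, 1]]
y = [1,       1,       1,       0,       0,       1]

clf = DecisionTreeClassifier().fit(X, y)

# Identical score, different group -> different prediction: the historical
# bias is now part of the system.
print(clf.predict([[85, 0]]), clf.predict([[85, 1]]))  # [1] [0]
```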

Black box nature of AI systems

The ’black box’, opaque nature of AI systems is complicated. AI is ‘opaque’ because it is often invisibly infused into computing systems in ways that can influence our interactions, decisions, moods and sense of self without us being aware of this.

The ‘black box’ of AI is twofold. First, the proprietary nature of AI products creates a situation where industry does not open up the workings of the product and its algorithms to public or third-party scrutiny. Second, in cases of deep machine learning there is an autonomous learning and decision-making process which occurs with minimal human intervention, and this technical process is so complicated that even the computer scientists who created the program cannot fully explain why the machine came to the decision it did.

Digital human rights issues

Digital human rights issues arise from the harvesting of the ‘Big Data’ used in ML where humans have not given informed consent, or where data is used in ways that were not consented to. Issues of consent and privacy extend to the surreptitious collection, storage and sharing of biometric (of the body) data. Biometric data collection represents a threat to the human right to bodily integrity, and such data is legally considered sensitive, requiring a very careful and fully justified position before implementation, especially with vulnerable populations such as children.

Deep fakes

We are in a world of ‘deep fakes’ and AI-produced media that ordinary humans (and even technologists) cannot discern as real or machine-generated. This represents a serious challenge to, and interesting opportunities for, teaching and practising digital literacy. There are even AI programs that produce more than passable written work on any topic.

The potential for a lack of independent advice for educational leaders making decisions on use of AI and ML

Regulatory capture occurs where those in policy and governance positions (including principals) become dependent on potentially conflicted commercial interests for advice on AI-powered products. While universities may have in-house expertise or the resources to buy in independent expertise to assess AI products, principals making procurement decisions will probably not be able to do this. Furthermore, it is incumbent on educational bureaucracies to seek independent expert advice and be transparent in their policies and decision-making regarding such procurement, so that school communities can trust that the technology will not do harm through biased systems or by violating teachers’ and students’ sovereignty over their data and privacy.

Our report offers practical advice

Along with our report, the project included infographics on Artificial Intelligence and virtual and augmented reality for students, and ‘short read’ literature reviews for teachers.

In the report we carefully unpack the multi-faceted ethical dimensions of AI and ML for education systems and offer the customised Education, Ethics and AI (EEAI) framework (below) for teachers, school leaders and policy-makers so that they can make informed decisions regarding design, implementation and governance of AI-powered systems. We also offer a practical ‘worked example’ of how to apply it.

While it is not possible to unpack it all in a blog post, we hope Australian educators can use the report to lead the way in using AI-powered systems for good and for what they are good for.

We want to avoid teachers and students using AI-systems that ‘feel more and more like magic’ and where educators are unable to explain why a machine made a decision that it did in relation to student learning. The very basis of education is being able to make ‘fair calls’ and to transparently explain educational action and, importantly, to be accountable for these decisions.

When we lose sight of this, at a school or school-systems level, we find ourselves in questionable ethical and educational territory. Let’s not disregard our core strength as educators in a rush to appear to be innovative.

We are confident that our report is a good first step in prompting an ongoing democratic process to grapple with ethical issues of AI and ML so that school communities can weather the approaching storm.

Erica Southgate is an Associate Professor of Education at the University of Newcastle, Australia. She believes everyone has the potential to succeed, and that digital technology can be used to level the playing field of privilege. Currently she is using immersive virtual reality (VR) to create solutions to enduring educational and social problems. She believes playing and experimenting with technology can build a strong skill set and mindset, and that every student, regardless of their economic situation, should have access to amazing technology for learning. Erica is lead researcher on the VR School Research Project, a collaboration with school communities. Erica has produced ethical guidelines and health and safety resources for teachers so that immersive VR can be used for good in schools. She can be found on Twitter @EricaSouthgate

For those interested in the full report: Artificial Intelligence and Emerging Technologies in Schools

Our schools need to take a mighty leap into the future: let’s dump outmoded practices and mindsets

On October 5th, 1979, stuntman Kenny Powers attempted to jump his rocket-powered Lincoln Continental from Canada to the USA across the St Lawrence River: a jump of 1.6 kilometres. The preparation took more than four years; it was costly (more than one million dollars), methodical and exacting. When the day finally came, the car flew about fifteen metres and plunged into the river, seriously injuring the stuntman. In the end, no matter how careful the preparation of the equipment, how experienced the team, or how highly trained the stuntman, they fell woefully short.

Fast-forward to today and our schools face a similar jump. We have spent years preparing ourselves: training, restructuring, ‘harmonising’, recruiting and developing our people. However, we currently don’t have the capacity to make the jump from old ways of thinking and doing in schools to approaches that will help us cross the gap from policy rhetoric to the realities of teaching in an uncertain world. We have lots to say in our policies about creativity, innovation, effective and authentic collaboration, perceptive critical reflection and incisive communication, but in schools our teachers face complex problems, fixed mindsets and outmoded practices.

In the University of Sydney’s recently released report Preparing for the Best and the Worst of Times we discuss the complexities created for our schools from the rapid rise of Artificial Intelligence and the need for a focus on what we call ‘learning dispositions’ to respond effectively to that challenge. As we see it, Australian schools have the resources they need in energy, hope and compassion but they lack the structures and processes to make the jump.

Foundations for learning

We believe the foundations of lifelong learning are creativity, collaboration, communication and critical reflection. We call these the 4Cs. They help students move beyond just remembering ‘facts’ to help them make connections between ideas and create new ones. Capacities like collaboration and communication are not ‘caught’ or picked up by accident. Schools need to explicitly teach them through clear frameworks and diverse learning strategies. As our report recommends, these capacities also need to be put into real world settings and with real world problems. These capacities help our young people make their way through the many challenges life presents (and not only at work) by helping them understand and enact strategies that enhance and extend collaboration rather than close down ideas or opportunities because of poor collaborative practices.

We have been using the 4C approach to learning to help schools prepare for the jump; to take them from the vision of education outlined in aspirational documents such as Gonski 2 to the realities of everyday teaching in 2018 for that uncertain future.

In NSW we now have more than 30 schools working with our 4C team and attempting the jump.

Learning dispositions

We believe children adopt different learning dispositions such as curiosity and focus that are critical for them to sustain and apply their learning. To be an effective learner it is not sufficient to just have good thinking skills. Effective learners need focus, empathy and teamwork so they can apply their learning to different and complex contexts. In other words schools need to develop all of these dispositions because they are interdependent. Students who are deficient in the cognitive, intrapersonal or interpersonal dispositions will struggle to sustain, apply and adapt their learning at and beyond formal education settings. We invented a learning dispositions wheel to help explain what the dispositions are and how they can work together.

The learning disposition wheel is based on groundbreaking research from the US National Research Council Education for Life and Work: Developing Transferable Knowledge and Skills for the 21st Century. Inventing the wheel was a breakthrough in our work with the Hospital School at Westmead who began working with us on the 4Cs in late 2016.

*The Learning Disposition Wheel © 4C Transformative Learning

The wheel has many uses but it is particularly helpful in the early stages of school transformation to establish a common metalanguage and a shared understanding of learning. In the schools we work with, the wheel provides a coherent and shared understanding for leadership, teachers and the community to work towards with individual students, staff and the school as whole organisation.

Our work with The Hospital School

The Hospital School at The Children’s Hospital, Westmead, in NSW may not be familiar to many. This school supports the education of students in Kindergarten to Year 12, who are hospitalised for more than 10 days. For some students, the hospital school is the only school setting they attend all year. The job of the school is doubly difficult in some ways; they need to tailor learning to children and young people who are at their most vulnerable while providing education that allows them to keep up when they leave hospital.

The school saw the potential to enhance their effectiveness for their students by developing learning capacities (through the learning disposition wheel) rather than focusing solely on curriculum content. The teachers and leadership had become frustrated with ‘content delivery’ that did not support the learning of students. The 4C approach was implemented (as it is in most of our partner schools) through a tailored mix of

  • intensive mentoring of leadership,
  • in and out of classroom support and mentoring of teachers
  • in depth professional learning with teaching teams
  • collaborative classroom visits, and
  • network meetings across 4C schools

Through the 4C program staff were introduced to the learning disposition wheel as a tool for diagnosing learning challenges to target specific areas where students required support, such as ‘grit’ or ‘curiosity’, ‘think why and how’ or ‘influence’. The learning disposition wheel was implemented with their normal curriculum (not instead of) to build deep and relevant learning for these students. The 4C learning team worked with the teaching staff to develop a whole school approach that delivered consistency and a common learning and teaching framework including a common metalanguage for the teaching and leadership groups.

The school staff has noticed a clear and remarkable change in student behaviour and engagement. Many students who have previously resisted schooling at this school are now enthusiastic and engaged around attendance and learning. There have also been key breakthroughs with the medical staff who are adopting the Learning Disposition Wheel to ensure that the learning and the medical approaches are integrated. For their students, the capacities they have built through the learning disposition wheel can be applied when they move back to their census school and beyond.

Similar success in student engagement and achievement is present in many of the schools that have engaged with the 4C transformation approach. In some ways, if this approach can work in the unique circumstances of The Hospital School, it can and does work in other schools with their own unique, context-driven features. The approach is effective in multiple and variable contexts because it focusses on a negotiation between context and our strategies for deep learning, rather than a ‘bolt-on’, take-it-or-leave-it package.

Transforming Schools: a rationale for scalable change

The 4C approach is not only about shifting curriculum. When working effectively, it transforms school organisation and school culture, and helps shape a vision for viable 21st Century schooling. The 4Cs form the basis of moving our schools from being ‘museums of pedagogy’ to vital, energetic, flexible and resilient places where learning directly meets the needs of a world where knowledge is ubiquitous but explicit skills for life and work are not. In these schools, classrooms and staff rooms, collaboratively led teaching teams are transforming learning and teaching through this approach. But it takes will, energy, inquiry, courage, determination and, most of all, an explicit understanding of how to teach these sometimes elusive concepts.

The Artificial Intelligence-infused world our students face is complex, contradictory and to some extent more chaotic than the world this schooling system was designed for. And yet our school systems have only changed incrementally (at best). Simultaneously, the world of work is changing: all jobs will change, and many in health, law and transport will cease to exist or be fundamentally transformed. While not a cause for undue alarm, our report Preparing for the Best and the Worst of Times focusses on the steps we need to take in education to respond to the potential technological ‘colonisation’ of human work by focusing on the learning dispositions our students require. Schools cannot ignore the looming changes and pretend ‘business as usual’ will adequately prepare our students for these tectonic shifts.

We believe schools need to be enabled to fundamentally change. And teachers need more than policy; they need support to make these capacities understandable and teachable for their students. More broadly, teachers need political, policy and resource support to make hard changes a reality through effective professional learning and partnerships for their school communities.

At The Hospital School and in many other schools that have embarked on this transformation, the changes are deepening and extending the learning. Like Kenny Powers (our stunt driver), we are staring at a chasm with a schooling system that is not yet fit for purpose. What we have on our side, though, is a bountiful and generative resource: a schooling system with inspired leaders, capable teachers and the almost boundless energy of our students. Let’s hope we have the expertise, wit, courage and vision to make the jump.


Professor Michael Anderson is Professor in the Faculty of Education and Social Work at The University of Sydney. He is an internationally recognised educational leader. He has taught, researched and published in education and transformation for over 20 years, including 13 books and 55 book chapters and journal articles. His international research and practice focus on how the 4Cs can be integrated using coherent frameworks to make learning meet the needs of 21st Century learners.


Dr Miranda Jefferson is co-founder and innovative practice leader of 4C Transformative Learning. She has been involved in leading innovation in schools for over 20 years. She leads programs, initiatives and research in curriculum reform, educational change and school transformation in several schools. Miranda has taught drama and media arts learning and teacher professional practice in the Education Faculty at the University of Sydney. She has also been on advisory boards for ACARA.


For those interested in more about the 4Cs and School Transformation program

The Learning Disposition Wheel is explained fully in our recent book Transforming Schools.


*Copyright for The Learning Disposition Wheel is the property of 4C Transformative Learning and may not be reproduced without permission.


Six reasons Artificial Intelligence technology will never take over from human teachers

The next twenty years will see teachers under increasing pressure to convincingly justify their existence. Advances in artificial intelligence (AI) technologies are already prompting calls for teaching to be automated, learner-driven and ‘teacher-proof’. While these technologies might still require non-specialised classroom facilitators and technicians, the role of the highly trained expert teacher is coming under increasing threat. There is a growing sense that “we don’t really need teachers in the same way anymore”.

Put bluntly, the entire premise of ‘the teaching profession’ faces an impending challenge. In a future where education can be reliably provided by machines, why continue to invest millions of dollars in training human experts to do the job? Given the likely trajectory of technological developments over the next few decades, is there anything that an expert teacher does that machines will never be able to do? As an education researcher and teacher, I would like to think that there is! Here, then, are six aspects of expert human teaching which are getting overlooked in the current rush toward automating the classroom:

1. Human teachers have learned what they know

There is clear benefit from being with someone who can pass on knowledge, especially someone who has previously been in the position of having to learn that knowledge. This latter qualification is a uniquely human characteristic. When a learner learns with an expert teacher, they are not simply gaining access to the teacher’s knowledge but also benefiting from the teacher’s memories of learning it themselves. Technology can be pre-loaded with the content of what is to be learned. Yet no AI technology is going to ‘learn’ something exactly the way a human learns it, and then help another human learn accordingly.

2. Human teachers make cognitive connections

A human is uniquely placed to sense what another human is cognitively experiencing at any moment, and respond accordingly. In this sense, face-to-face contact with a teacher offers learners a valuable opportunity to engage in the process of thinking with another human brain. On one hand, there is something thrilling about witnessing an expert who is modelling the process of thinking things through. On the other, a human teacher is also able to make a personal ‘cognitive connection’ with another individual who is attempting to learn. As David Cohen puts it, teachers are uniquely able to “put themselves into learners’ mental shoes”. Despite the best efforts of computer science, many aspects of thinking cannot be detected and modelled by machines in this way.

3. Human teachers make social connections

Teaching is a mutual obligation between teachers and learners. No teacher can stimulate the learning process without the cooperation of those who are learning. Good teachers make personal connections with their students, helping them gauge what might work best at any particular time. Before attempting to intellectually engage with a group, teachers will “take a mental pulse of students’ demeanours”. Teachers work hard to establish this mutual commitment to learning, as well as sustaining engagement through motivating, cajoling and enthusing individuals. All of these are interpersonal skills that come naturally to people rather than machines.

4. Human teachers talk out loud

There is something transformative about being in the presence of an expert teacher talking about their subject of expertise. Listening to an expert talk can provide a real-time, unfolding connection with knowledge. A good speaker does not stick rigidly to a written text, but refines, augments and alters their argument according to audience reactions. A teacher speaking to a group of learners therefore engages in a form of spontaneous revelation. Key to this is the teacher’s role in leading and supporting learners to engage in active listening. As Gert Biesta reasons, being addressed by another person interrupts one’s egocentrism – drawing an individual out of themselves and forcing them into sense-making.

5. Human teachers perform with their bodies

The bodies of human teachers are an invaluable resource when engaging learners in abstract thought. Teachers use their bodies to energize, orchestrate and anchor the performance of teaching. Many subtleties of teaching take place through movement – pacing around a room, pointing and gesturing. Teachers make use of their ‘expressive body’ – lowering their voice, raising an eyebrow or directing their gaze. Crucially, a human will respond to the living biological body of another human in a completely different way to even the most realistic simulation. Being looked in the eye by another person is a qualitatively different experience than being looked at by a 3D humanoid robot, let alone a 2D cartoon agent on a screen.

6. Human teachers improvise and ‘make do’

A key part of good teaching is the human capacity to improvise. Rather than sticking tightly to a pre-planned script, teachers will adjust what they do according to the circumstances. Like most performative events, teaching begins with a rough plan or structure; thereafter, teachers improvise their way around these aims and objectives. Teaching requires acts of creativity, innovation and spontaneity – akin to dancing or playing jazz. Teachers and learners feel each other out, find common ground and build upon it. Teaching also demands a tolerance for imprecision, messiness and not knowing. Most human actions involve a degree of guesswork, bluff and willingness to ‘make do’. These are processes that computer systems are largely incapable of.

As these examples illustrate, an expert human teacher is able to support learning in ways that can never be fully replicated through technology. Unfortunately, these qualities remain largely unrecognised, even by teachers themselves. Many educators consider teaching to be an ‘unconscious’ act that is difficult to pin down and articulate. Yet such coyness does little to dispel the technology-driven arguments currently being made against the teaching profession. Teachers need to speak up and make an irrefutable case for the continued presence of expert professionals at the forefront of classrooms.

So how can we rehabilitate human teachers in the minds of their detractors? The uphill battle in countries like Australia is to revitalise schools and classrooms to allow teachers to work in the ways just outlined. These are all characteristics that a good teacher should have, but are considerably restricted in an era of ‘teaching out-of-field’, templated lesson plans and rigid standardised testing.

A first step in this direction might be to alter the ways that people think and talk about teaching. Teachers need to speak forcefully about these qualities – amongst themselves, within their professional associations, with parents, politicians, pundits and anyone else with influence. Teachers also need to argue directly against the tech industry and corporate reformers looking to replace them with machines. There is obvious value in the expert human teacher. Yet unless teachers are able to make a convincing case, they may well lose the argument before they even realise that there was one.


Neil Selwyn is a professor in the Faculty of Education, Monash University (Australia). He previously worked in the UCL Institute of Education, and Cardiff School of Social Science (UK). Neil is currently writing a book on the topic of robots, AI and the automation of teaching. Over the next six months he will be posting writing on the topic, hopefully resulting in: Selwyn,  N. (2019) Should Robots Replace Teachers? Cambridge, Polity.

Neil can be found on Twitter @neil_selwyn