State of the Art and Practice in AI in Education (European Journal of Education, 2022)

Summary

This article reviews existing Artificial Intelligence (AI) systems in education and their pedagogical foundations. It discusses different uses of AI in education and learning, highlighting varying interpretations of AI and education. Potential challenges and limitations of AI in education are also explored.

Full Transcript


DOI: 10.1111/ejed.12533

ORIGINAL ARTICLE

State of the art and practice in AI in education

Wayne Holmes (UCL Knowledge Lab, IOE, UCL's Faculty of Education and Society, University College London (UCL), London, UK) | Ilkka Tuomi (Meaning Processing Ltd., Helsinki, Finland)

Correspondence: Wayne Holmes, UCL Knowledge Lab, IOE, UCL's Faculty of Education and Society, University College London (UCL), 23-29 Emerald Street, London WC1N 3QS, UK. Email: [email protected]

Abstract: Recent developments in Artificial Intelligence (AI) have generated great expectations for the future impact of AI in education and learning (AIED). Often these expectations have been based on misunderstandings of current technical possibilities, a lack of knowledge about state-of-the-art AI in education, and exceedingly narrow views on the functions of education in society. In this article, we provide a review of existing AI systems in education and their pedagogic and educational assumptions. We develop a typology of AIED systems, describe different ways of using AI in education and learning, show how these are grounded in different interpretations of what AI and education is or could be, and discuss some potential roadblocks on the AIED highway.

1 | INTRODUCTION

1.1 | A brief history of the future of AI in education

In recent years, it has often been claimed that Artificial Intelligence (AI) is the 'new oil' (e.g., Palmer, 2006) or, as the Director General of UNESCO suggested in her 2019 Mobile Learning Week keynote, the biggest invention since the palaeolithic age. More recently, it has even been claimed (Lemoine, 2022), and refuted (e.g., Sparkes, 2022), that one AI system, the LaMDA dialogue system developed by Google, has become sentient. Whatever the reality, there have been massive investments in AI technology around the world (as much as US$94 billion in 2021 alone; Statista, 2022), as well as high-profile policy statements about the need to promote and regulate this emerging technology (e.g., EC, 2018; OECD, 2019b; UNESCO, 2021).

The potential of AI for education and learning (the application of AI in education), and the role of education in developing what has become known as AI literacy (the teaching of AI in education), have also received increased attention and are fast becoming hot topics in policy debates (e.g., Miao & Holmes, 2021). As learning, innovation and knowledge creation are often claimed to be the foundation of the post-industrial economy, the increasing interest is easy to understand.
Beyond the simple, though vague and controversial, idea of automating teacher tasks (Selwyn, 2019; Sperling et al., 2022), it has also been suggested that the transformative effect of AI could lie in augmenting human cognition in learning (e.g., Molenaar, 2022; Tuomi, 2018). At the political level, the potential of AI in educational settings, and the need for AI literacy, therefore puts educators at the centre of exciting new developments that used to be confined to obscure computer-science laboratories. At the same time, teachers and administrators are expected to have clear views about the potential of AI in education and, eventually, to adopt this ground-breaking technology in their practice.

Research and development in AI for education (AIED) has to a large extent been driven by computer scientists (Chen et al., 2020; Williamson & Eynon, 2020; Zawacki-Richter et al., 2019). Over the past decade, the situation has changed, and AIED is now also a focus of commercial interests. The AIED market is predicted to grow rapidly: worldwide, there are already more than thirty multi-million-dollar-funded AIED corporations, and the market is expected to become worth more than US$20 billion within five years (GMI, 2022). A teacher overwhelmed by stories about AI miracles may well wonder if the future is defined by yet another attempt to push technology into the classroom.

Many current commercial AI systems developed for education, the so-called Intelligent Tutoring Systems (ITS), focus on providing automated, adaptive and individualised instruction, an approach that we explore in more detail below. The close historical link between AI and cognitive science (Gardner, 1985) has meant that many influential AI systems in education have been built on cognitive architectures, themselves based on the idea that the human brain is an information processor. Learning, in this view, is centrally about the development of problem-solving capacity that rests on the availability of efficient knowledge structures in the human mind (e.g., Koedinger et al., 2012).

A historically important source for this line of research was a small book, How to Solve It, published in the 1940s by George Pólya (1945). Based on his studies of learning, the book presented different types of 'heuristics', processes or shortcuts that can be used to solve scientific and other problems. According to Pólya, a key process in problem solving is to find ways to decrease the distance between the expected result and known solutions that move towards the ultimate solution. Thus, problem solving consists of finding chains of heuristic operations (which may be summarised as: understand the problem, make a plan, execute the plan, and look back and reflect) that lead to the result. In the 1950s, Allen Newell and Herbert Simon, key pioneers of AI and cognitive science, realised that by programming computers with such heuristics and processing symbols instead of numbers, computers could solve problems in a similar way to humans. Apparently, this was not a huge programming feat. Edward Feigenbaum, another key figure in AI research, later recalled that in 1956 Simon came to his class declaring: "over Christmas, Allen Newell and I invented a thinking machine" (quoted in McCorduck, 1979, p. 116). The level of expectation is well expressed in the name they gave to their most famous computer program from that time, the General Problem Solver (Newell et al., 1959).
The technological origins of student-focused AIED, however, can be traced to the work of Sidney Pressey, B. F. Skinner, Gordon Pask, and Jaime Carbonell (Holmes et al., 2019). The first mechanised multiple-choice machine was devised almost a hundred years ago by Pressey. Drawing on Edward Thorndike's law of effect, Pressey's machine gave immediate feedback to the student, and, in so doing, such machines "clearly do more than test him; they also teach him" (Pressey, 1926, p. 375). Pressey also suggested that this type of technology might save teacher time by relieving them of marking, a claim that is frequently made for AIED today (e.g., Baker & Smith, 2019).

In the 1950s, Skinner, known as the father of behaviourism, developed what he called his 'teaching machine', which required students to compose their own answers rather than choose from multiple options. Skinner argued that his machine acted like an individual tutor, closely foreshadowing ITS: "The machine itself, of course, does not teach … but the effect upon each student is surprisingly like that of a private tutor" (Skinner, 1958, p. 971). At around the same time, the first truly adaptive 'teaching machine' was developed by Pask, known as the self-adaptive keyboard instructor or SAKI. Designed for trainee punch-card keyboard operators, SAKI delivered tasks according to the learner's individual performance on earlier tasks (Pask, 1982). However, the first explicit application of mainstream AI techniques to computer-aided instruction was made by Carbonell, for his 1970 PhD thesis. His system, which he called SCHOLAR, generated individual responses to student statements by drawing on a network of concepts linked according to their semantic relationships.

This history of AI is relevant to AIED because many of the most researched and visible student-focused AI systems are direct descendants of these ideas. We will discuss this in more detail below as we describe some exemplar AIED systems. It is also important to recognise that, for example, computer-supported collaborative learning has been its own discipline since the mid-1990s (e.g., Dillenbourg et al., 2009), and that learning analytics and educational data mining (Joksimovic et al., 2020; Siemens & Baker, 2012) also have close linkages with AIED. Learning as a developmental and socio-cultural phenomenon, however, has only recently become a more common topic in AIED (e.g., Thomas & Porayska-Pomsta, 2022).

Beyond teaching (i.e., student-focused AIED), AI has potentially interesting applications in education administration (i.e., system-focused AIED) and teacher support (i.e., teacher-focused AIED), and could even stimulate new pedagogical and andragogical approaches. Meanwhile, educational data, as generated by e-learning systems including AIED, are increasingly the subject of analysis by the fast-growing field known variously as educational data mining and learning analytics, and are becoming important for policy and for practice (Hakimi et al., 2021; Verbert et al., 2020). For educators, it is also useful to reflect on these multiple connections in the broader context of education.
In much of the existing student-focused AIED research and development, the ultimate rationale for using AI has been that it can lead to learning gains in specific knowledge domains independently of human teachers. A 'learning gain' is typically measured in pre-test and post-test experiments as the percentage achieved of the improvement that was possible for a student, given the pre-study level of knowledge. When the teaching objective is assumed to be the acquisition of pre-defined knowledge content that can be assessed with a test, learning gain is a natural indicator of success, which is perhaps why it is a focus of most well-known AIED adaptive systems. In fact, some influential ITS, such as the ASSISTments system, started explicitly as systems for high-stakes test preparation (Heffernan & Heffernan, 2014); and many widely used Chinese AIED systems are also focused on teaching for the test (Knox, 2020).

The acquisition of pre-defined knowledge content is one function of education. Biesta calls this the qualification function: providing students with "the knowledge, skills, and understandings… that allow them to 'do something'" (Biesta, 2011, p. 19). The other two key functions of education, according to Biesta, are socialisation, which "has to do with the many ways in which, through education, we become part of particular social, cultural and political 'orders'" (Biesta, 2011, p. 20), and subjectification or individuation, the process "that allow[s] those educated to become more autonomous and independent in their thinking and acting" (Biesta, 2011, p. 21). These two latter functions have received comparatively little attention from AIED researchers.

Instead, over the last three decades, a key conceptual starting point for student-focused AIED has been 'mastery learning', a pedagogic model advanced by Benjamin Bloom (Bloom, 1968; cf. Guskey, 2012). This model underpins most ITS, as well as the notion that AI can 'personalise' learning. The objective of mastery learning is to get all students to a level of competence that allows them to move ahead effectively along the learning path described in the curriculum. Bloom argued that, because students start with different prior knowledge and have different capabilities, they need different amounts of instruction to reach the same level of mastery in a given topic. Thus, mastery learning requires individual-level differentiation or 'personalisation' of instruction, for which AIED systems have been proposed and designed as an answer.

Bloom later showed that individualised tutoring combined with mastery learning leads to learning gains two standard deviations higher than traditional whole-class teaching (Bloom, 1984). This huge potential improvement, known as the 2-sigma effect, has been a key inspiration for AIED researchers for more than forty years. In more recent AIED research, the impact of human tutoring has been found to be below one standard deviation, a level which the research also suggests has already been achieved by some ITS (VanLehn, 2011).
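To make these measures concrete, the following sketch shows how a normalised learning gain and a 'sigma'-style effect size are commonly computed. The formulas are our illustration, not taken from the article: the gain follows the widely used normalised-gain definition (improvement achieved as a share of the improvement possible), and the effect size is Cohen's d, one standard way of expressing results on the scale on which Bloom's 2-sigma effect is stated.

```python
# Illustrative sketch, not from the article: normalised learning gain and a
# Cohen's d effect size, the scale on which Bloom's "2 sigma" is expressed.
from statistics import mean, stdev

def normalised_learning_gain(pre: float, post: float, max_score: float) -> float:
    """Improvement achieved as a share of the improvement that was possible."""
    return (post - pre) / (max_score - pre)

def cohens_d(treatment: list, control: list) -> float:
    """Difference between group means, in pooled standard deviations."""
    n1, n2 = len(treatment), len(control)
    pooled_sd = (((n1 - 1) * stdev(treatment) ** 2 +
                  (n2 - 1) * stdev(control) ** 2) / (n1 + n2 - 2)) ** 0.5
    return (mean(treatment) - mean(control)) / pooled_sd

# A student moving from 40 to 70 on a 100-point test has gained half of
# what was still achievable:
print(normalised_learning_gain(40, 70, 100))          # 0.5
# Invented post-test scores for tutored vs. whole-class groups:
print(cohens_d([85, 90, 78, 88], [62, 70, 58, 66]))   # about 4.1 'sigmas'
```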
Nonetheless, the 2-sigma effect also provides a rarely contested implicit model of the function of education among AIED researchers, one which ignores Biesta's socialisation and individuation functions.

Even when measures of the impact of AIED tools such as ITS are limited to test-oriented learning gains, there remains much controversy. Many studies have shown learning gains when ITS have been used in the classroom (cf. du Boulay, 2019), but the results are not unambiguous. Only rare examples exist of independent evaluations at scale (e.g., Roschelle et al., 2016), while meta-level reviews (e.g., Kulik & Fletcher, 2016; Ma et al., 2014) show modest learning-gain impacts across many relatively small settings. More granular studies, however, suggest that the effects of the learning interventions provided by a system depend on the student's prior knowledge level: while novice learners benefit from worked examples, more experienced learners benefit from problem solving without worked examples, the so-called "expertise reversal effect" (Koedinger et al., 2012, p. 788). Classroom-level aggregation, therefore, potentially distorts existing meta-analyses.

Strictly speaking, we do not know for sure whether AIED 'works' or not. More importantly, it is not entirely clear what such 'working' would comprise. In fact, impact studies typically explore what happens when a new pedagogic idea or functionality is added to an existing AIED system. For example, the What Works Clearinghouse, run by the Institute of Education Sciences at the United States Department of Education, only counts evidence from randomised controlled trials or 'high-quality' quasi-experimental trials. Accordingly, in contrast to many small trials that have shown moderate learning gains, the What Works Clearinghouse reports often show rather minor or indeterminate average learning gains at the group level (e.g., What Works Clearinghouse, 2016, which reported that Cognitive Tutor, one of the best-known ITS, had mixed effects for students on algebra, no discernible effects on general mathematics, and potentially negative effects on geometry). In other words, robust independent evidence (to the standard expected in, for example, medical research) remains elusive in AIED research. Ambiguous results, methodological and practical challenges, lack of independence and scale, and more fundamental questions about the aims of education have left ample space to contest any generic claims about the benefits of student-focused AIED.

Yet, great expectations remain about the future impact of AIED. For example, according to one leading AI entrepreneur, Kai-Fu Lee (formerly a senior executive at Google, Microsoft, SGI, and Apple):

[…] a classroom today still resembles a classroom one hundred years ago. We know the flaws of today's education: it is one-size-fits-all, yet we know each student is different, and it is expensive and cannot be scaled to poorer countries and regions with a reasonable student-to-teacher ratio. AI can play a major part in fixing these flaws and transform education […] With AI taking over significant aspects of education, basic costs will be lowered, which will allow more people to access education. It can truly equalize education by liberating course content and top teachers from the confines of elite institutions and delivering AI teachers that have near-zero marginal cost.
[…] I believe this symbiotic and flexible new education model can dramatically improve accessibility of education, and also help every student realize his or her potential in the Age of AI. (Lee & Qiufan, 2021, p. 118)

Historically, the emergence of new key technologies has led to exaggerated and over-optimistic expectations, subsequent economic crashes, and the gradual articulation of technological possibilities over relatively long periods of time (Nemorin et al., 2022; Perez, 2002). Indeed, all visions of the future are based on our understanding of history. AIED has a long history, and to understand where it is going, it helps to understand where it came from.

The next section revisits attempts to define the topic of AI and briefly describes the state of the art in the two related areas of (1) data-driven AI and (2) knowledge-based AI. After that, the third section of this article provides a taxonomy of AIED that separates student-, teacher-, and institution-focused AIED, locating examples of state-of-the-art AIED for each of these different types. The fourth section outlines a number of potential roadblocks on the AIED highway, including challenges related to human rights, ethics, personalisation, impact, techno-solutionism, AIED colonialism, and the commercialisation of education. The concluding section summarises and looks into the possible futures of AIED.

2 | WHAT IS AI?

There have been many attempts to define and clearly describe what we are talking about when we talk about Artificial Intelligence (AI), a name that we capitalise to highlight that it is a specific field of inquiry and development, and not simply a type of intelligence that is artificial. The literature provides many alternative definitions, and it is often claimed that there exists no single dominant definition accepted by most AI experts. It is, however, important to note that useful and usable definitions of AI depend on what they are used for. Academic researchers often state that AI is a complex domain of research that includes many different conceptual approaches and domains of expertise, and sometimes emphasise the point that there is no such thing as AI. EU regulation, in contrast, focuses on AI products that need access to the common market. Meanwhile, much of the debate about AI is about hypothetical futures, inspired by science fiction and the belief that machines could one day be intelligent, whatever that means.

One classical definition specifies that AI is research that develops technologies that can do things that would require intelligence if done by humans (Minsky, 1969). This approach originates from Turing, who proposed that if a simulation of an intelligent human cannot be distinguished from a real person, questions about intelligence become irrelevant (Turing, 1950). Many cognitive scientists and some AI researchers and philosophers have adopted a stronger view, arguing that research on AI can reveal how the human mind works (Gardner, 1985). Policy-developers, in turn, have focused on economically disruptive AI systems that can be regulated, certified, and put on the market.
The OECD has provided an influential definition along these lines (OECD, 2019a, p. 7). This has been extended by the EU's High-Level Expert Group on Artificial Intelligence (AI HLEG) in a conceptually important way, highlighting the capacity of AI systems to learn and noting that they may also adapt their behaviour based on the results of their actions (AI HLEG, 2019a). This definition from the AI HLEG, which encompasses both AI research and AI systems, is itself accompanied by several pages of explanatory comments. A more straightforward definition (for those of us who are neither computer scientists nor legal experts), which builds upon the OECD and AI HLEG definitions, is provided by UNICEF:

AI refers to machine-based systems that can, given a set of human-defined objectives, make predictions, recommendations, or decisions that influence real or virtual environments. AI systems interact with us and act on our environment, either directly or indirectly. Often, they appear to operate autonomously, and can adapt their behaviour by learning about the context. (UNICEF, 2021, p. 16)

The UNICEF definition is useful for several reasons (Holmes & Porayska-Pomsta, 2023). First, it does not depend on data: it accommodates data-driven AI techniques (such as artificial neural networks and deep learning), but it can also include knowledge-based or symbolic AI, and any new paradigm of AI that might emerge in future years. Second, it retains the role of humans, which is important given the critical role of humans at all stages of the AI development pipeline. For example, AI systems always make their recommendations, predictions and decisions based on objectives that are specified by the system designer at the time the system is developed. In fact, as most current AI systems are essentially behaviouristic stimulus-response systems, it has also been suggested that AI should more appropriately denote Artificial Instincts (Tuomi, 2018). Third, the UNICEF definition distinguishes between systems that do operate autonomously and those that appear to operate autonomously, reminding us of the original Mechanical Turk, which fooled many into believing it was genuinely automatic (Schaffer, 1999). Finally, as this definition emphasises interaction with humans, it can easily be extended to the application of AI in education.

The AI regulation proposed by the European Commission (the AI Act; EC, 2021) builds on the definition of an AI system developed at the OECD, but falls back on listing technologies that characterise AI systems. As the aim of the AI Act is to regulate products, it is important to know which products are in the scope of regulation. Because of this, the final definition to be used in the AI Act is, at the time of writing, hotly debated by the Council of the European Union, the Parliament, and other stakeholders. Legal definitions will also be important in education, and in particular in public education, where everyday practice is strongly regulated by existing laws. Beyond legal definitions, however, the reality of AI systems comprises many different techniques, technologies and specialties (Miao & Holmes, 2021).
In many cases, experts find enough family resemblance to agree on whether a given system is AI or not, but definitions have evolved over time and probably will continue to do so.

For educators, however, it is important to recognise two alternative approaches to developing AI systems, both of which have been in development and vying for ascendance throughout the history of AI (beginning with the famous Dartmouth College workshop in 1956, where the term 'Artificial Intelligence' was first used). One approach, the one that is currently ascendant and has led to many of the successes frequently mentioned in the press, can be called data-driven AI, or machine learning. The other is knowledge-based or symbolic AI. Data-driven AI might have great potential in education, depending on the intentions of the system, but knowledge-based AI still underpins most existing AIED systems. A third conceptual model, hybrid AI, which combines data-driven and knowledge-based approaches with human cognition, is briefly discussed in the conclusion.

2.1 | Data-driven AI

Data-driven AI has produced impressive results in the last decade in computer vision, natural language processing, robotics, and many other areas widely reported in the media. All these systems are based on a very simple idea. Given a large enough set of data items and a criterion for 'improvement', a computer can gradually find a model that optimises its predictions. When a computer program makes a wrong prediction in its training phase, the program adjusts its behaviour so that the error becomes smaller; and when the program makes the right prediction, it adapts so that it makes the same prediction with higher probability.

A fundamental technical challenge is to know how the system should adjust its behaviour. Data-driven AI systems solve this problem using basic calculus. It is possible to calculate how much the output of the system would change if any of the system parameters were incrementally changed. For each system parameter, the direction of maximal change is known as its gradient. With very many very small steps, the system parameters can be adjusted so that the system makes 'good-enough' predictions.
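The following minimal sketch illustrates this adjustment process for a one-parameter model fitted by gradient descent. The data and learning rate are invented for illustration; real systems apply the same idea at vastly greater scale, to millions or billions of parameters.

```python
# Minimal illustration (not from the article): fitting y = w * x by
# gradient descent, the "very many very small steps" described above.

data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]  # (input, target) pairs
w = 0.0              # the single adjustable parameter
learning_rate = 0.01

for step in range(1000):
    for x, target in data:
        prediction = w * x
        error = prediction - target      # the loss here is error**2
        gradient = 2 * error * x         # d(error**2)/dw
        w -= learning_rate * gradient    # a small step against the gradient

print(w)  # close to 2.0: the model now makes 'good-enough' predictions
```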
State-of-the-art AI systems can have a huge number of parameters that are repeatedly adjusted until the system works. For example, OpenAI's famous GPT-3 language model has 175 billion adjustable parameters (Brown et al., 2020).

A key source of inspiration for the development of self-learning computer systems was the human brain (Rosenblatt, 1958). 'Artificial neural networks' consist of simple 'computational neurons' that map their input states to outputs, which, in turn, are inputs to other artificial neurons. Modern artificial neural networks consist of many such linked layers, each with multiple computational mappings from one layer to another. Although the idea of using adaptive neural elements for pattern recognition is an old one, a practical challenge for the early developers was to find a way to adjust their behaviour so that the predictions made by the system improve. Since the 1970s, it has been known that the chain rule of elementary calculus can be used for this. Even a complex interlinked network of adaptive computational neurons is just a mapping from inputs to outputs; in mathematical terms, it is just a function. The derivatives and gradients of the parameters in its various layers, therefore, can be calculated using the chain rule of calculus, which expresses these in terms of the final system output.

All state-of-the-art data-driven AI systems, therefore, work in the following way. First, the system is given some input data and allowed to make a prediction. The error in prediction is used to calculate a 'loss' that the system tries to minimise during its training phase by adapting its parameter values. This is repeated, often millions of times, until the accuracy of the system is good enough. This process is known as 'training' the AI model. When the system has enough parameters, it can adapt to any mathematical function and, given a specific input, perfectly predict the correct output. However, such a perfect mapping between known inputs and known outputs would make the system quite useless, as the aim is to create accurate predictions for inputs that have not been used in training. This is checked by 'testing' the system's accuracy with independent test data. When the system makes good predictions with the test data too, it is said to have 'generalised' well. At that point, the developers are said to have trained a model, and the model can then be used for actual work.

Many highly successful data-driven AI systems consist of dozens or more layers, each with millions of connections among their computational 'neurons'. The simple adaptation process described above is therefore called 'deep learning', and the computation of gradients from the final output layer towards the input level is known as 'backpropagation'. A fascinating outcome of such adaptation is that the 'lower' levels that are close to the data input often learn to recognise statistically significant simple features, while the 'higher' layers build more complex abstractions. For example, in image processing, the lower-level 'neurons' recognise simple image features such as lines, edges, and curves, whereas the higher levels detect more complex features built on these, such as corners and circles. At even higher levels up the processing chain, features such as eyes, ears, fur, and wings are detected if these are present in the images used for the system training. Such deep learning models can recognise objects in digital images with very high probability, and they can also be used, for example, to generate automatic captions for video streams, detect human faces, or, with some reconfiguration, play chess, generate paintings in the style of Picasso, make fake videos, and write essays based on user-given prompts.
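As a concrete illustration of backpropagation, the sketch below trains a tiny two-layer network with the chain rule written out by hand. Everything here (the data, layer sizes, and learning rate) is invented for illustration; production systems delegate exactly this bookkeeping to libraries such as PyTorch or JAX.

```python
# Illustrative backpropagation by hand (not from the article): a two-layer
# network learns XOR. Gradients flow from the output back towards the input
# via the chain rule, exactly as described above.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0, 1, (2, 8))   # input -> hidden weights
b1 = np.zeros((1, 8))
W2 = rng.normal(0, 1, (8, 1))   # hidden -> output weights
b2 = np.zeros((1, 1))
lr = 0.5

for step in range(8000):
    # Forward pass: the whole network is "just a function" of its inputs.
    h = np.tanh(X @ W1 + b1)                 # hidden layer
    out = 1 / (1 + np.exp(-(h @ W2 + b2)))   # output layer (sigmoid)
    loss = np.mean((out - y) ** 2)           # mean squared error

    # Backward pass: the chain rule, from the loss back towards the input.
    d_out = 2 * (out - y) / len(X) * out * (1 - out)
    d_W2 = h.T @ d_out
    d_h = d_out @ W2.T * (1 - h ** 2)        # tanh derivative
    d_W1 = X.T @ d_h

    W2 -= lr * d_W2; b2 -= lr * d_out.sum(0, keepdims=True)
    W1 -= lr * d_W1; b1 -= lr * d_h.sum(0, keepdims=True)

print(out.round(2).ravel())  # approaches the XOR targets 0, 1, 1, 0
```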
It is, however, important to reassert the role of humans in this process. It is humans who collect or curate the data (e.g., the images or texts) and humans who write the algorithms and decide what they are used for. Most importantly, humans define the criterion for 'accurate prediction' and select the algorithms that they believe show the most potential. In addition, although there has been much debate about the dangers of making algorithmic decisions without a human 'in the loop' (AI HLEG, 2019b; Dignum, 2018; Floridi et al., 2018), in the end it is humans who interpret the output and what it means.

Data-driven AI has seen extraordinary advances during the last decade because of three key factors (Tuomi, 2018). First, the gradual adjustment of the many parameters in these systems often requires trillions of additions and multiplications. For example, the training of the GPT-3 language model required the equivalent of over 1,000 days of computation at one petaflop (10^15 floating-point operations) per second of processing capacity (Brown et al., 2020). Normal computer processors are not suitable for this, but the relatively cheap graphics processors developed for computer games during the last two decades turned out to be perfectly suited to the task. New processor chip architectures have also emerged over recent years that further optimise the computations needed for data-driven AI. Second, the training of data-driven AI requires huge amounts of data. These have become available as the Internet has increasingly been used for images, videos and text, and as users continuously generate an avalanche of click data. For example, to develop the GPT-3 model, about 400 billion words were collected from the Internet. Third, especially in image processing, where data-driven AI had its first major success about ten years ago, the advances in data-driven AI have required massive human effort in labelling images found on the Web. The ImageNet dataset that was used to train many breakthrough AI systems was created by about 49,000 people from 167 countries working on Amazon's Mechanical Turk task-sharing platform (Denton et al., 2021). This would not have been possible without the Internet-enabled collaboration of thousands of people across the globe.

Such widespread scavenging of data from the Internet, however, has its problems. For example, in developing their ground-breaking DeepFace facial recognition technology, Facebook engineers used, without explicit consent, four million photographs of people that had been uploaded and labelled by Facebook users. In recognition of the dubious ethics of such a practice, in 2021 Facebook announced that it would be deleting the database of collected photographs. However, the algorithms that were trained with that database are still in use (Sparkes, 2021).

It has been noted that, due to its trial-and-error method of training, data-driven AI is actually based on the most inefficient way to do computations invented by humankind (Tuomi, 2020). With huge amounts of training data and many parameters to adjust, computational requirements may become overwhelming. Because of this, even the largest AI companies are now struggling to find sufficient computing power to train their AI models. State-of-the-art data-driven natural language models are now trained using billions of trillions of computations and hundreds of billions of words scraped from the Internet. Training OpenAI's GPT-3 has been estimated to have required as much energy as driving a car to the moon and back, generating the equivalent of 85,000 kg of CO2 emissions (Quach, 2020). Energy consumption is now widely understood to be a major challenge for data-driven AI (Strubell et al., 2019). The human brain, on the other hand, operates on about 20 watts of energy, which suggests that data-driven AI is based on different principles than human intelligence (Tuomi, 2020).

In practice, the computational dead-end of data-driven AI is for the time being avoided by re-using already developed models. In many cases, it is possible to throw away the highest-level layers in a deep learning model optimised for an earlier task and re-train the model for the task at hand. AI researchers call this 'transfer learning'. If, for example, a system has learned to recognise dogs, cats, cars, birds, and bicycles, its lower-level representations are often useful for other image processing tasks, for example face recognition or satellite image analysis. Today, probably a large majority of AI systems are developed using this approach, building on openly accessible models from the key AI developers, such as Facebook, Google, iFLYTEK, Microsoft, OpenAI, and Tencent.
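A hedged sketch of what this re-use looks like in practice, using the PyTorch and torchvision libraries (our choice of tooling, not the article's; the API shown is that of recent torchvision releases): the pre-trained lower layers are kept frozen, and only a new final layer is trained for the new task.

```python
# Illustrative transfer-learning sketch (assumes: pip install torch torchvision,
# and network access to download the pre-trained weights). A ResNet-18
# pre-trained on ImageNet keeps its lower-level feature detectors; only a new
# final layer is trained, here for a hypothetical 5-class task.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the pre-trained layers: their lines/edges/textures features are re-used.
for param in model.parameters():
    param.requires_grad = False

# "Throw away the highest-level layer" and replace it for the new task.
model.fc = nn.Linear(model.fc.in_features, 5)  # 5 classes in the new task

# Only the new layer's parameters are adjusted during re-training.
optimizer = torch.optim.SGD(model.fc.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

# One illustrative training step on random stand-in data:
images = torch.randn(8, 3, 224, 224)   # a batch of 8 RGB images
labels = torch.randint(0, 5, (8,))     # their (invented) class labels
loss = loss_fn(model(images), labels)
loss.backward()
optimizer.step()
print(float(loss))
```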
Finally, while the successes of data-driven AI have been impressive, it remains important not to be misled by common hyperbole. As Leetaru (2018) notes:

A neural network of today no more 'learns' or 'reasons' about the world than a linear regression of the past. They merely induce patterns through statistics. Those patterns may be opaquer, more mediated and more automatic than historical approaches and capable of representing more complex statistical phenomena, but they are still merely mathematical incarnations, not intelligent entities, no matter how spectacular their results. (Leetaru, 2018, paragraph 8)

2.2 | Knowledge-based AI

Due to its spectacular successes, data-driven AI has dominated the press in recent years. In education, however, knowledge-based AI continues to play a central role. Knowledge-based systems are based on the idea that human knowledge and expertise can be represented in a form that can be processed by computer programs. Some of these systems have also been called expert systems, as they have often been used to imitate expert decision-making. Expert systems were common in the 1980s, but the high cost of modelling domains of expertise and maintaining knowledge representations, as well as the difficulty of generalising and transferring domain models to new application areas, has limited the popularity of this approach (Tuomi, 2018).

In educational applications of AI, many systems contain a domain model that describes a conceptual structure of the area of study. In real-world settings, such domain models are often difficult and expensive to define, as the world is open and changing. In closed formal worlds, such as mathematics, stable domain models are easier to develop. This is one reason why knowledge-based intelligent tutoring systems have been relatively successful in mathematics and physics. The intelligence of knowledge-based AI is in the conceptual structures extracted from human experts.
In knowledge-based systems, a computer is therefore used in a different way than when it is used as a simple programmed calculation or text-processing machine. Instead of using a step-by-step algorithm that calculates a result from input data, knowledge-based systems use a simple but generic inference engine that selects stored heuristic rules that tell the machine what to do next. The rules themselves are higher-level descriptions of what is known about the domain in question. This additional level of abstraction and knowledge representation is what differentiates knowledge-based systems from traditional computer programming. Typically, the rules are described in human-readable 'if-then' sentences. Because of this, knowledge-based AI is also sometimes called 'rule-based AI'. Until recently, almost all AI systems in education were based on this approach. Many of these systems are briefly described below. Knowledge-based systems, however, remain purely mechanistic systems, where a specific input always produces a specific output in a deterministic process of algorithmic calculation. An important consequence of this, in contrast with data-driven AI, is that the behaviour of the system can be explained by studying the programmed logic.
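The following sketch shows a toy forward-chaining inference engine of the kind described above. The rules and facts are invented for illustration; real knowledge-based tutors encode far richer domain models in the same if-then style.

```python
# Toy forward-chaining inference engine (illustrative, not from the article):
# a generic loop repeatedly applies human-readable if-then rules to known facts.

rules = [
    # (if all of these facts hold, then add this conclusion)
    ({"answer_wrong", "forgot_to_carry"}, "misconception: carrying"),
    ({"misconception: carrying"}, "next_task: worked example on carrying"),
    ({"answer_correct"}, "next_task: harder problem"),
]

def infer(facts: set) -> set:
    """Apply rules until no new facts can be derived (forward chaining)."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)   # the rule 'fires'
                changed = True
    return facts

# The engine is generic and deterministic; the 'intelligence' is in the rules.
print(infer({"answer_wrong", "forgot_to_carry"}))
```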
3 | A TAXONOMY OF AIED

AI is applied in education (AIED) in multiple ways. Accordingly, it is not possible either to draw aggregate inferences or, for example, to make grand claims about AIED's efficacy or otherwise. Instead, to facilitate meaningful debate, we need to be clear about which of the multiple variations of AIED applications we are discussing, especially as many remain speculative while some are questionable for ethical, pedagogical, or educational reasons. To this end, it is useful to classify AIED tools and applications into a typology of three distinct yet overlapping categories: (1) student-focused, (2) teacher-focused, and (3) institution-focused AIED (Holmes et al., 2019). No doubt these categories can be argued over, as can which category applies to which AIED tool, but they do provide a useful framing for facilitating discussion. In Table 1, we present an overview of the taxonomy of AIED, in which we identify AIED applications as speculative (*), researched (**), or commercially available (***). In the next section, we provide a brief summary of each type of AIED application, extending the more detailed discussion in Holmes et al. (2019).

TABLE 1  A taxonomy of AIED systems (speculative *, researched **, commercially available ***)

STUDENT-FOCUSED AIED
  Intelligent Tutoring Systems (ITS) ***
  AI-assisted Apps (e.g., maths, text-to-speech, language learning) ***
  AI-assisted Simulations (e.g., games-based learning, VR, AR) ***
  AI to Support Learners with Disabilities ***
  Automatic Essay Writing (AEW) ***
  Chatbots **/***
  Learning Network Orchestrators **/***

TEACHER-FOCUSED AIED
  AI Teaching Assistant (including assessment assistant) ***/*
  Classroom Orchestration **

INSTITUTION-FOCUSED AIED
  Admissions (e.g., student selection) ***
  Course-planning, Scheduling, Timetabling ***
  School Security ***
  Identifying Dropouts and Students at Risk ***
  e-Proctoring ***

Source: Authors.

3.1 | Student-focused AIED

Before exploring the various types of student-focused AIED, i.e., AI-assisted tools specifically designed to support students, a brief digression is necessary. It is important to note that not all AI-assisted technologies used by students have been designed for students. Instead, it might be said that these technologies have been 'repurposed' for learning. Such technologies are not usually considered AIED but still need to be accounted for in any comprehensive summary of student-focused AIED. Possibly the most sophisticated example of an AI-assisted technology repurposed for education is the suite of collaborative tools that includes Google Docs and Google Sheets (Google, 2022), alongside similar offerings from organisations such as Tencent (Tencent, 2022). In addition, there are social networking platforms such as WhatsApp (WhatsApp, 2022) and WeChat (WeChat, 2022), and content-sharing platforms such as YouTube (YouTube, 2022) and TikTok (TikTok, 2022), all of which are, in different ways, increasingly being used to support student learning (a growth that was accelerated during the COVID-19 school shutdowns). Finally, there are various other AI-assisted technologies being repurposed for education, for example activity trackers (e.g., Moki, 2022), although usually with limited evidence for how they support teaching or learning. We proceed by elaborating the following student-focused AIED: intelligent tutoring systems, AI-assisted apps, AI-assisted simulations, AI to support learners with disabilities, automatic essay writing, chatbots, automatic formative assessment, learning network orchestrators, dialogue-based tutoring systems, exploratory learning environments, and AI-assisted lifelong learning assistants.

3.1.1 | Intelligent tutoring systems (ITS)

The commercially available, so-called intelligent tutoring systems (ITS) are the most common applications of AI in education, and probably the best funded. Typically, they provide computer-based, step-by-step tutorials through topics in well-defined, structured subjects such as mathematics. An ITS delivers a sequence of information, activities and quizzes adapted to each individual student. While the student engages with a particular activity, the system captures thousands of data points, such as what is clicked, what is typed, which tasks have been answered correctly, and any misconceptions that have been demonstrated. These data are analysed to determine the next information, activity and quiz to be delivered, thus generating a personalised pathway through the material to be learned, and the process is repeated. ITS sometimes include teacher dashboards, so that teachers can see what the student has achieved. An example of a commercial ITS (there are many) is Spark, from the French company Domoscio, which individualises learning pathways and provides teachers with a dashboard of learning analytics (Domoscio, 2022). Another example is the Gooru Navigator, which aims to be the 'Google Maps for learning' (Songer et al., 2020). Gooru makes extensive use of data-driven AI technologies on its platform, for example to analyse the topics covered by open educational resources and to map these to individual learner profiles and competence needs. At present, Gooru claims to host about four million AI-curated learning resources. Some ITS also include what is known as an open learner model, designed to enable the student to view and better understand what they have achieved (Bull & Kay, 2010).
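One researched technique behind this kind of adaptivity is Bayesian Knowledge Tracing, which maintains a running estimate of whether a student has mastered a skill and can be used to decide what to deliver next. The sketch below is a generic illustration with invented parameter values, not a description of Spark, Gooru, or any other named product.

```python
# Bayesian Knowledge Tracing (BKT), a classic ITS technique: update the
# probability that a skill is mastered after each observed answer.
# All parameter values here are invented for illustration.

P_GUESS = 0.2   # probability of a correct answer without mastery
P_SLIP = 0.1    # probability of a wrong answer despite mastery
P_LEARN = 0.15  # probability of acquiring the skill at each practice step

def update_mastery(p_mastered: float, correct: bool) -> float:
    """Bayes' rule on the observed answer, then a learning-transition step."""
    if correct:
        evidence = p_mastered * (1 - P_SLIP)
        posterior = evidence / (evidence + (1 - p_mastered) * P_GUESS)
    else:
        evidence = p_mastered * P_SLIP
        posterior = evidence / (evidence + (1 - p_mastered) * (1 - P_GUESS))
    return posterior + (1 - posterior) * P_LEARN

p = 0.3  # initial estimate of mastery
for answer in [True, False, True, True]:
    p = update_mastery(p, answer)
    # A simple sequencing policy: practise until mastery is probable.
    print(f"P(mastered) = {p:.2f}",
          "-> harder task" if p > 0.95 else "-> more practice")
```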
3.1.2 | AI-assisted apps

There is a fast-growing range of commercially available AI-assisted educational apps in the leading app stores. For example, there are the increasingly impressive AI-assisted language translation tools, such as SayHi (SayHi, 2022), which some fear might further undermine the learning of foreign languages in schools, and the equally impressive AI-assisted mathematics apps, such as Photomath (Photomath, 2022), which some fear will undermine the learning of mathematics. These fears mirror the concerns surrounding the introduction of calculators into schools around fifty years ago: if the tool can do it (automatically calculate a long division, automatically translate between languages, or automatically solve equations), perhaps there is no need for children to learn how to do it, thus undermining learning (Watters, 2015). It was exactly this concern, that the use of technology to help students might actually undermine learning, that led the Chinese Ministry of Education to ban AI-assisted homework apps that automatically provide online answers to homework questions photographed and uploaded by students (Dan, 2021).

3.1.3 | AI-assisted simulations (e.g., games-based learning, VR, AR)

Although perhaps not traditionally thought of as AI technologies, commercially available Virtual Reality (VR) and Augmented Reality (AR) simulations and digital games-based learning are frequently combined with AI machine learning, image recognition and natural language processing, and are increasingly being used in educational settings. For example, AI-assisted VR has been used to provide training for neurosurgical residents on a variety of neurosurgical procedures (e.g., McGuire & Alaraj, 2018), while AI-assisted AR has been used to enable students to explore and manipulate three-dimensional models of organic molecules in order to enhance their understanding of chemistry (e.g., Behmke et al., 2018). Google has developed more than a thousand VR and AR Expeditions suitable for educational contexts. Meanwhile, digital games-based learning (DGBL) is also increasingly incorporating AI technologies to adapt gameplay to the individual student (LaPierre, 2021).

3.1.4 | AI to support learners with disabilities

Many of the commercially available student-focused AIED mentioned here (especially ITS) have been further developed to support students who have a learning disability (Barua et al., 2022), while other AI approaches have been used for the diagnosis of learning disabilities such as ADHD (e.g., Anuradha et al., 2010), dyslexia (Benfatto et al., 2016), and dysgraphia (Asselborn et al., 2020). In addition, there has been extensive research into the use of robots in education, especially to support children on the autism spectrum (e.g., Alabdulkareem et al., 2022).
Meanwhile, there are a number of mainstream AI tools, such as text-to-speech apps and automatic image captioning, that have been 'repurposed' for students who have learning difficulties, along with a limited number of targeted AI-assisted apps, for example some that automatically sign for children who have difficulties with hearing, such as StorySign by Huawei (Huawei, 2022).

3.1.5 | Automatic essay writing (AEW)

Written essays remain an important component of educational assessment around the world, yet passing off someone else's writing as your own has long been common. The Internet has made this increasingly easy, with online commercial essay mills offering bespoke essays on any topic. The recent AI developments known as 'large language models', such as the GPT-3 from OpenAI discussed above, are poised to have an even greater impact (GPT-3, 2020). There are already several commercial organisations that offer automatic essay writing (AEW) tools for students which, in response to a prompt such as an essay question, can automatically generate individual paragraphs or entire essays. Although the writing currently generated by AEW can be superficial and sometimes nonsensical (Marcus & Davis, 2020), it can sometimes be difficult to determine whether the generated text was written by an algorithm or a human student. However, whether AEW tools support or undermine student learning is unclear. Nonetheless, given their increasing sophistication and what could be described as an arms race between AEWs and AEW detectors, they are likely to affect how we assess students (Sharples, 2022).
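To illustrate the underlying mechanism, the sketch below generates text from a prompt using GPT-2, a small, openly available predecessor of GPT-3, via the Hugging Face transformers library. This is our choice of illustration; commercial AEW tools do not disclose their implementations.

```python
# Illustrative prompt-to-continuation generation (assumes: pip install
# transformers torch). Uses the small open GPT-2 model; commercial AEW tools
# are built on much larger models, but the mechanism is the same.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "The causes of the First World War were"
result = generator(prompt, max_new_tokens=40)
print(result[0]["generated_text"])  # fluent-looking but unverified prose
```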
3.1.6 | Chatbots

AI-assisted chatbots are researched and commercially available, and are increasingly being used in educational contexts for a variety of purposes (Hwang & Chang, 2021; Pérez et al., 2020). For example, chatbots have been developed to provide ongoing student support and guidance in academic services, accommodation, facilities, examinations, IT, health and more. A student might, for example, ask about their lessons that morning, where tomorrow's exam is happening, or what mark they achieved in a recent assignment. An example of an education chatbot is Ada, named after the computer pioneer Ada Lovelace, which was developed by a UK community college using IBM's Watson Conversation platform (Hussain, 2017). A second, infamous, example is the AI-assisted virtual teaching assistant (TA) developed at Georgia Tech (Goel & Joyner, 2017). The TA bot responded to student enquiries during a large computer science class as if it were a human teaching assistant, automatically answering questions for which it had answers in its database (such as when an assignment was due) and referring other questions to human TAs. Such an approach might have great potential in large-scale online educational institutions, where it can be difficult for human staff to respond to all student questions online. However, the fact that the virtual TA did not inform the students that they were communicating with an AI bot, and that it sometimes used tricks to mislead the students into thinking it was human (e.g., delaying its responses), raises ethical questions.

3.1.7 | Automatic formative assessment

Automatic formative assessment (AFA) applications are researched and commercially available applications that use natural language and semantic processing, together with other AI-assisted techniques, to provide actionable feedback on student writing or other student outputs. Despite their potential for supporting student learning, and probably because of the difficulties of automatically providing accurate and helpful feedback, there remain few commercial examples of AFA. One research example is OpenEssayist (Foster, 2019). A key problem is that currently no AI system is capable of the depth of interpretation or accuracy of analysis that a teacher can provide; instead, such systems typically rely on surface features of the writing or other output. An example of an AFA system that explicitly provides feedback on the surface features of writing is Grammarly (Grammarly, 2022a). Meanwhile, research at Stanford University evaluated an autograder AFA system that provided feedback on programming tasks completed by 12,000 students in a computer science class. The students agreed with the given feedback about 98 percent of the time, slightly more often than they agreed with feedback from the human instructors (Metz, 2021).

3.1.8 | Learning network orchestrators

By learning network orchestrators we mean AI systems that enable connections between people engaged in education. There are few researched and commercially available examples. One example is the Open Tutor (OT, formerly known as the Smart Learning Partner), developed by researchers at Beijing Normal University (Lu et al., 2018). If a student has not understood something in their classroom, they can open the OT app on their mobile phone, type in what they want to know, and the app connects them with a list of human tutors who can help, all rated by other students (much like a dating app). They then receive 20 minutes of one-on-one tuition, sharing screen and voice only. Inevitably, as this system involves human tutors, it is relatively expensive to scale. Nonetheless, what is particularly interesting is that the learner is in charge, deciding what they want to learn, while the AI (unlike with an ITS) plays a supporting role.

3.1.9 | Dialogue-based tutoring systems (DBTS)

Dialogue-based tutoring systems (DBTS) simulate a tutorial dialogue, usually typed but sometimes spoken, between a human tutor and a student. The aim is to encourage the student to develop an in-depth understanding of the topic in question, going beyond the surface-level knowledge that is the outcome of some ITS. Typically, as the student works step by step through an online task, DBTS use a Socratic tutoring principle, probing with questions rather than providing instruction. The student, therefore, is guided towards discovering for themselves the pre-specified solution for the current problem.
The most well-known DBTS is AutoTutor, which has been researched at the University of Memphis for more than twenty years (Nye et al., 2014). We have not been able to identify currently available commercial examples of DBTS.

3.1.10 | Exploratory learning environments (ELE)

Exploratory learning environments (ELEs) provide an alternative to the step-by-step approach adopted by ITS and DBTS. Rather than following a sequence, albeit one that is adapted to the individual student, students are encouraged to actively construct their own knowledge by exploring and manipulating elements of the learning environment. Exploratory or discovery learning is not new, but it remains controversial. Critics argue that, because there is no explicit instruction and students are expected to discover principles for themselves, it causes cognitive overload and leads to poor learning outcomes (Kirschner et al., 2006; Mavrikis et al., 2022). However, this is where AI comes in, with many recent AI-driven ELEs providing automatic feedback, addressing misconceptions, and proposing alternative approaches during exploration. There are no known commercial ELEs, although a research example is FractionsLab (IOE, 2018).

3.1.11 | AI-assisted lifelong learning assistants

AI-assisted lifelong learning assistants, tools that students might perhaps have on their mobile phones and that could provide a wide range of support and guidance, have long been suggested as a potentially powerful application of AI in education (Holmes et al., 2019). To date, however, such tools have received very little research effort. As concepts such as digital twins and the metaverse become increasingly popular, lifelong learning assistants are a potential area for AIED research.

3.2 | Teacher-focused AIED

Many student-focused AIED, especially ITS, include interfaces, or dashboards, for teachers, often based on open learner models, that offer a dynamic representation of what individual students and groups of students have achieved, and of their misconceptions (Bodily & Verbert, 2017). One novel approach uses augmented reality (AR) glasses worn by the teacher to superimpose dashboard-like information over the heads of their students while the students engage with an ITS (Holstein et al., 2018). Although impressive, this is an example of using an AI technology to address an issue caused by an AI technology (here, the fact that, while students engage with an ITS, their teacher cannot easily see what the students are doing and so cannot easily provide appropriate support). In any case, ITS and other dashboard-enabled AIED are all mostly student-focused. In fact, if for the purposes of analysis we ignore the overlaps, there are few examples of genuinely teacher-focused AIED. Here, we discuss six possibilities, many of which are controversial: plagiarism detection, smart curation of learning materials, classroom monitoring, automatic summative assessment, AI teaching assistants, and classroom orchestration.
One novel approach uses augmented reality (AR) glasses worn by the teacher to superimpose dashboard-like information over the heads of their students while the students engage with an ITS (Holstein et al., 2018). Although impressive, this is an example of using an AI technology to address an issue caused by an AI technology (here, the fact that, while students engage with an ITS, their teacher cannot easily see what the students are doing and so cannot easily provide appropriate support). In any case, ITS and other dashboard-enabled AIED are all mostly student-focused. In fact, if for analysis we ignore the overlaps, there are few examples of genuinely teacher-focused AIED. Here, we discuss six possibilities, many of which are controversial: plagiarism detection, smart curation of learning materials, classroom monitoring, automatic summative assessment, AI teaching assistants, and classroom orchestration.

3.2.1 | Plagiarism detection

Commercially available plagiarism detection services are widely used by educators, and machine learning methods have increasingly been adapted for these systems during the last decade. The market is now dominated by Turnitin (Turnitin, 2022), with its various tools such as iThenticate and Ouriginal, but detection software such as Plagiarism Checker X (Plagiarism Checker X, 2022) and the more student-oriented plagiarism checker from Grammarly (Grammarly, 2022b) are also extensively used.

3.2.2 | Smart curation of learning materials

As is well known, the Internet is awash with educational content—in multiple formats and languages, with different levels of access, and of varying quality. The challenge for teachers and students is not finding content but easily finding high-quality, relevant content that can be used effectively. At least one research tool, X5GON (X5GON, 2022), and two commercial tools, Teacher Advisor (IBM, 2018) and Clever Owl (Clever Owl, 2022), have been developed that automatically scrape the web to find teaching and learning resources in response to teacher queries.
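The retrieval step behind such curation tools can be illustrated with a minimal term-weighted ranker. The toy resource index and the scoring below are ours, not any named tool's method; real systems add crawling, metadata extraction, and quality signals on top of the basic ranking.

```python
import math
from collections import Counter

# Toy index standing in for scraped, indexed learning resources.
RESOURCES = {
    "Intro to fractions (video)": "fractions numerator denominator visual models",
    "Photosynthesis explainer":   "photosynthesis chloroplast light energy plants",
    "Fraction word problems":     "fractions word problems practice worksheet",
}

def rank(query: str, docs: dict) -> list:
    """Rank resources against a teacher query by summed TF-IDF of query terms."""
    tokenised = {title: Counter(text.split()) for title, text in docs.items()}
    n_docs = len(docs)
    scores = []
    for title, counts in tokenised.items():
        score = 0.0
        for term in query.lower().split():
            df = sum(1 for c in tokenised.values() if term in c)
            if df:
                # idf dampens terms that appear in most resources
                score += counts[term] * math.log(1 + n_docs / df)
        scores.append((title, score))
    return sorted(scores, key=lambda pair: pair[1], reverse=True)

print(rank("fractions practice", RESOURCES)[0][0])
# -> "Fraction word problems"
```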
3.2.3 | Classroom monitoring

In a few contexts, researched and commercially available AI-assisted systems are increasingly being used for classroom-based student monitoring. For example, AI-assisted video applications have been developed to monitor where a student is looking, from which the system infers whether or not they are focused on the teacher or the task at hand (Lieu, 2018). Elsewhere, and perhaps even more intrusively, students are being asked to wear portable EEG (electroencephalography) headsets to record their brain activity in order to ‘monitor’ their attention (Poulsen et al., 2017). For example, the US-based BrainCo says its headsets can help teachers identify pupils who need extra help, with data presented on a dashboard that shows average brain activity for the whole class. The headsets show a blue light for pupils whose brain activity is lower than average, yellow for those at the average, and red for those with above-average brain activity (NeuroMaker, 2022). Similar headsets are also widely used in Chinese schools, where both teachers and parents can review student brain activity over the internet. In China, teachers say that the use of the headbands has forced students to become more disciplined, and that students now pay more attention and work harder in the classroom (Wall Street Journal, 2019). Leaving aside the self-evident ethical issues, to which we return later, it is important to note that these systems are already controversial because there is very little evidence that they are capable of doing what they claim to do. Meanwhile, at many universities, AI-assisted systems are also being used to monitor a student's movements through the campus (sometimes by means of a mobile phone app), what they download from the online learning management system, what they buy from the cafeterias, and much more besides (Moriarty-Mclaughlin, 2020).

3.2.4 | Automatic summative assessment

There have long been hopes that AI could save teachers time and effort by automating the labour-intensive—and hence costly—marking of student assignments, homework and assessments (Watters, 2021). For this reason, automatic summative assessment (sometimes also known as ‘autograding’) is a well-funded area of research, second only to ITS, and extensively commercialised. Autograders have been used for the assessment of written tasks (e.g., the US SATs) (Ramesh & Sanampudi, 2021), and in computer science and mathematics courses. Some state-of-the-art autograders also claim to diagnose the type of error and suggest to the student how to correct it, while others, depending on the domain, claim to score student answers correctly with about 90 percent accuracy (Hsu et al., 2021). Nonetheless, the use of automatic scoring, especially when the assessment is high-stakes, remains controversial. In fact, high-stakes testing is one of the two education-related high-risk use cases in the proposed EU AI Act, and so would be regulated by its provisions. A commercial example of automatic summative assessment is e-Rater (ETS, 2022).
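For programming tasks, the simplest autograders are not especially ‘intelligent’ at all: they execute the submission against reference test cases, as in the hedged sketch below, in which the task and tests are invented. The AI enters in the harder steps the text mentions, such as diagnosing why a submission fails and suggesting a correction.

```python
# Minimal test-based autograder for a programming task (illustrative).
# Real systems sandbox execution and may add ML-based error diagnosis on top.

SUBMISSION = """
def median(xs):
    xs = sorted(xs)
    mid = len(xs) // 2
    return xs[mid] if len(xs) % 2 else (xs[mid - 1] + xs[mid]) / 2
"""

# Each test case: (arguments, expected result).
TEST_CASES = [(([1, 3, 2],), 2), (([4, 1, 3, 2],), 2.5), (([7],), 7)]

def grade(source: str, func_name: str, tests) -> dict:
    namespace = {}
    try:
        exec(source, namespace)        # never do this outside a sandbox
    except Exception as err:
        return {"score": 0.0, "feedback": f"Submission failed to load: {err}"}
    func = namespace[func_name]
    passed, failures = 0, []
    for args, expected in tests:
        try:
            if func(*args) == expected:
                passed += 1
            else:
                failures.append(f"{func_name}{args} did not return {expected}")
        except Exception as err:
            failures.append(f"{func_name}{args} raised {type(err).__name__}")
    return {"score": passed / len(tests),
            "feedback": failures or "All tests passed."}

print(grade(SUBMISSION, "median", TEST_CASES))
# -> {'score': 1.0, 'feedback': 'All tests passed.'}
```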
3.2.5 | AI teaching and assessment assistants

As noted, many AIED technologies are designed to save time for teachers. However, in so doing, they effectively take over teaching tasks, potentially reducing teachers to a functional role (Guilherme, 2019; Selwyn, 2019). An alternative approach is for the AI to support teachers in their teaching by augmenting their expertise and skills with an AI teaching assistant. What such an AI teaching assistant might do still needs to be determined—this remains a speculative application, in that we are not aware of any relevant research or commercial products. Nonetheless, a recently launched commercial tool points in an interesting direction. Instead of automating assessment, as autograders do, Graide (Graide, 2022) supports the teacher in their assessment practices (e.g., by offering phrases that the teacher has previously written and used, which they can re-use for the script currently being marked). In other words, it is the teacher who does the assessment, not the AI.

3.2.6 | Classroom orchestration

Classroom orchestration refers to how a teacher manages activities (for individuals, groups or whole classes) for effective teaching practices, within the available constraints such as curriculum, assessment, time, space, energy and safety (Dillenbourg et al., 2011). AI support for orchestration is at an early stage of development; nonetheless, there is a growing body of research into how AI might assist it (Song, 2021). One example is the FACT system, which, while students solve mathematical problems in small groups, makes recommendations to the teacher about which groups to visit and what to say (VanLehn et al., 2019).

3.3 | Institution focused AIED

Institution-focused AIED includes technologies that support the allocation of financial aid (Aulck et al., 2020); course planning, scheduling, and timetabling (Kitto et al., 2020; Pardos & Nam, 2020); and identifying dropouts and students at risk (Del Bonifro et al., 2020; Miguéis et al., 2018; Quille & Bergin, 2019). Such tools have a clear administrative function and share much with business-oriented Artificial Intelligence. Accordingly, here we elaborate on only two critical and controversial examples of institution-focused AIED: admissions (one of the high-risk use cases defined in the proposed EU AI Act) and e-proctoring.

3.3.1 | Admissions

Many higher education institutions, mainly in the US, use commercially available AI-assisted admissions software to support their admission processes—however, not without controversy (Pangburn, 2019). The idea is to reduce costs while making the admissions system more equitable, by helping to remove unseen human biases (such as groupthink and racial and gender biases) that can affect decisions. For example, the University of Texas at Austin developed a system called GRADE to recommend whether an applicant should be admitted, based on their test scores, prior academic background and recommendation letters, claiming at least 74 percent time savings on reviews (Waters & Miikkulainen, 2014). However, by 2020, GRADE had been dropped because it was quietly replicating the very problems that it aimed to address. Nonetheless, AI is increasingly being used to support admissions (Marcinkowski et al., 2020), often using tools provided by commercial companies such as Salesforce (Salesforce, 2022).
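The GRADE episode is easier to understand once one sees how such systems are typically built: a model fitted to past admission decisions learns whatever regularities, including biases, those decisions contain. Below is a hedged sketch of that standard supervised set-up, with entirely invented features, labels, and data; it is not GRADE's actual pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Invented historical data: [test_score, recommendation_strength] per
# applicant, with past human admit/reject decisions as labels. If those
# past decisions were biased, the fitted model faithfully reproduces it.
rng = np.random.default_rng(0)
X_past = rng.normal(size=(500, 2))

# Toy "past committee": admits when a weighted sum of features is high.
y_past = (0.9 * X_past[:, 0] + 0.4 * X_past[:, 1] > 0.3).astype(int)

model = LogisticRegression().fit(X_past, y_past)

applicant = np.array([[1.2, -0.3]])   # hypothetical new applicant
print(f"P(admit) = {model.predict_proba(applicant)[0, 1]:.2f}")
```

Nothing in this pipeline ‘knows’ whether the historical labels were fair; it simply learns to imitate them, which is broadly how a system like GRADE can come to replicate the problems it was meant to remove.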
3.3.2 | E-proctoring

Early in the COVID-19 pandemic, much education moved online, as did many examinations—which led many exam-monitoring, or e-proctoring, companies to see their businesses grow massively (Nigam et al., 2021). E-proctoring aims to assure academic integrity by using AI-assisted cameras and microphones to automatically monitor students—scanning their faces and tracking keystrokes and mouse movements—while they complete an online examination. However, these tools are hugely controversial (Kelley, 2021). They have been accused of intrusiveness, of failing to work properly, of discrimination, of preventing students from taking their exams, and of exacerbating mental health problems (Chin, 2020; Henry & Oliver, 2021). In fact, e-proctoring is probably one of the clearest examples of using AI to automate poor pedagogic practices, rather than using it to develop innovative approaches.

4 | ROADBLOCKS ON THE AI HIGHWAY

As noted in the introduction, the purported benefits of AI in education have received much visibility (e.g., OECD, 2020, 2021). According to the AI entrepreneur Kai-Fu Lee:

Teaching consists of lectures, exercises, examinations, and tutoring. All four components require a lot of the teacher's time. However, many of the teacher's tasks can be automated with sufficiently advanced AI. Perhaps the greatest opportunity for AI in education is individualized learning […]. A personalized AI tutor could be assigned to each student […]. Unlike human teachers, who have to consider the whole class, a virtual teacher can pay special attention to each student, whether it is fixing specific pronunciation problems, practicing multiplication, or writing essays. An AI teacher will notice what makes a student's pupils dilate and what makes a student's eyelids droop. It will deduce a way to teach geometry to make one student learn faster, even though that method may fail on a thousand other students. To a student who loves basketball, math problems could be rewritten by NLP in terms of the basketball domain. AI will give a different homework assignment to each student, based on his or her pace, ensuring a given student achieves full mastery of a topic before moving to the next. With ever-more data, AI will make learning much more effective, engaging, and fun. (Lee & Qiufan, 2021, p. 118)

Such a vision of the AI-enabled future neatly summarises the beliefs about AIED held by many of its keenest advocates. It also raises some fundamental and controversial issues: the automation of teaching and teacher tasks, the individualisation of education, biometric surveillance, and learning as mastery of a given topic with its related measures of efficiency, to name just a few. Such issues have only recently become part of the mainstream AIED conversation (Blikstein & Blikstein, 2021; Holmes et al., 2021; Selwyn, 2019; Tuomi, 2018; Williamson & Eynon, 2020). UNESCO (Miao & Holmes, 2021), the Council of Europe (Holmes et al., 2022; Yeung, 2019), the European Commission (Vuorikari & Holmes, 2022), and the broader research community (Holmes & Porayska-Pomsta, 2023) have also started to critically assess the future potential of AI in education. Together, this literature explores what might be termed a critical studies and humanistic perspective on the connections between AI and education. Many potential roadblocks on the road to visionary futures are discussed in the other articles of this issue of the European Journal of Education. Here we briefly summarise some prominent ones: ethics, personalisation, efficacy and impact, techno-solutionism, AIED colonialism, and the commercialisation of education.

4.1 | Ethics

In recent years, there has been a growing focus on the ethics of AI in general, resulting in more than 80 sets of ethical AI principles (Ayling & Chapman, 2021; Jobin et al., 2019; Tsamados et al., 2022). Many of these have adopted a rights-based approach to ethics, with human rights playing a central role. However, despite the fundamental implications for students, educators, parents, and other stakeholders, relatively little has been published specifically on the ethics of AI in education, notable exceptions being Adams et al. (2021), Aiken and Epstein (2000), Holmes et al. (2021), Holmes and Porayska-Pomsta (2023), and Holstein and Doroudi (2021). In fact, to date, most AIED research and development has happened without serious engagement with the potential ethical consequences of using AI in education. This is in some contrast to the related field of learning analytics, where privacy and related ethical issues have been widely debated (e.g., Drachsler & Greller, 2016; Prinsloo & Slade, 2017; Slade & Tait, 2019; Williamson, 2017). While, in Europe, there has been growing interest in developing teacher-oriented guidelines and regulations for the ethical development and deployment of AI in education (e.g., EC, 2022), it remains the case that no appropriate regulations have yet been enacted anywhere in the world (Holmes, Bektik, et al., 2018).
A Council of Europe report has recently explored AI and education in terms of human rights (Holmes et al., 2022), drawing on the UN's Universal Declaration of Human Rights (1948), the European Convention on Human Rights (Council of Europe, 1953), and the UN's Convention on the Rights of the Child (1989). Here we highlight some key issues, with examples, that the report discusses in detail.

Right to human dignity. Teaching, assessment and accreditation should not be delegated to an AI system.

Right to autonomy. Children should be afforded the right to avoid being individually profiled, to avoid dictated learning pathways, and to protect their development and future lives.

Right to be heard. Children should be afforded the right not to engage with an AI system, without that negatively affecting their education.

Right not to suffer from discrimination. All children should be afforded the opportunity to benefit from the use of technology, not just those from the socio-economic groups who can afford it.

Right to data privacy and data protection. Children should be afforded the right for their data not to be aggregated and used for commercial purposes without their direct benefit.

Right to transparency and explainability. Children and their parents should be able to understand and challenge any decision made by an AIED system.

While most discussions of the ethics of AIED and the related field of learning analytics centre on data (e.g., biases, privacy, and data ownership) and how that data is analysed (e.g., fairness, transparency, and trust and trustworthiness), the ethics of AIED cannot be reduced to questions about data and computational approaches alone. In other words, investigating the ethics of AIED data and computations is necessary but not sufficient (Holmes et al., 2021). The ethics of AIED also needs to address the ethics of education. This raises important questions centred on pedagogy (Is the instructionist pedagogy adopted by most AIED ethically grounded?), assessment (What should be assessed, and how?), knowledge (What counts as knowledge?), and student and teacher agency (Who should be in control?) (Holmes & Porayska-Pomsta, 2023). Although the general ethical concerns related to AI are now widely debated, education has important social roles and aims at human development, which makes the related ethical challenges particularly difficult both conceptually and in practice. Because of this, it has been suggested that an adequate ethical framework for AIED needs to be built using learning and human development as a starting point (Tuomi, 2023). This also means that ethics frameworks for general AI need to be more explicit about their implicit models of progress and development.

4.2 | Personalisation

Despite the meaning of personalised learning remaining unclear (Holmes, Anastopoulou, et al., 2018), it increasingly informs the mainstream education narrative (e.g., European Parliament, 2021; UNICEF, 2022).
In fact, as discussed in the introduction, the development of technologies to personalise learning to the strengths and weaknesses of individual students began almost 100 years ago, with the so-called teaching machines devised by Sidney Pressey and B. F. Skinner (Watters, 2021). For various reasons, these machines failed to be widely accepted, and the personalised learning agenda more or less disappeared. It re-emerged decades later, mainly from Silicon Valley, partly because the internet had made mass customisation possible across a broad range of industries. A question often posed is: if we can have personalised recommendations on Netflix or Amazon, why can we not do something similar in education?

The personalisation of learning pathways offered by much current AIED, however, is a very limited interpretation of personalisation. Personalisation, more broadly understood, is about subjectification (Biesta, 2011): helping each individual student to achieve their own potential, to self-actualise, and to enhance their agency. This is something that few existing AIED tools do. Instead, while they provide adaptive pathways through the materials to be learned, most AIED tools tend to drive the homogenisation of students. A critical interpretation of such AIED tools could suggest that they aim to ensure that students fit in the right box (pass their exams), prepared for their designated role in the world of work.

Three further related issues should be noted. First, the personalisation agenda, as expressed, for example, in the quote from Kai-Fu Lee above, assumes that personalisation is only possible with technology. However, most teachers personalise their teaching moment by moment in response to each individual student, helping the student (if the particular education system permits) to self-actualise, to become the best that they can be. Second, the so-called individual pathways provided by almost all AIED systems are mostly based on the averages of prior learners. Accordingly, while they might be applicable to groups, their usefulness for or applicability to individual students is not clear. Third, education is also about collaboration and the other social dimensions of teaching and learning, which are often ignored by current ITS and most other AIED. While an ITS might be useful when the student is without a teacher or their peers, perhaps when completing homework, it is not clear why some schools use them in classrooms, which are by definition social spaces.

4.3 | Efficacy and impact

Evidence for the positive impact of AIED use is important for policy, but also for the ethical use of AI: the investment of time and, for example, teacher effort requires acceptable justification. As the many articles published in the International Journal of Artificial Intelligence in Education show, academic researchers have conducted many studies into the efficacy of various AIED systems. Many of these studies have been synthesised in numerous meta-analyses and meta-meta-analyses (e.g., Kulik & Fletcher, 2016; Ma et al., 2014). According to du Boulay:

The overall conclusion of these metareviews and analyses is that AIED systems perform better than […] human teachers working in large classes. They perform slightly worse than one-on-one human tutors […]. Of course, good post-test results are not the only criteria for judging whether an educational technology will be or should be adopted. (du Boulay, 2016, p. 80)
The vast majority of impact studies have been conducted by the developers of the particular technology being studied (increasingly from commercial organisations), and most often with relatively small numbers of learners. This potentially reduces their generalisability. In only a few instances have studies been independently conducted and/or at large scale (e.g., Egelandsdal et al., 2019; Pane et al., 2013; Roschelle et al., 2017). Most of these large independent studies have been conducted in the US, limiting their transferability to other countries. It, therefore, remains true to say that “Many of the claims of the revolutionary potential of AI in education are based on conjecture, speculation, and optimism” (Nemorin, 2021, cited in Miao & Holmes, 2021, p. 13).

A further problem with AIED research is that it has almost always focused on the efficacy of the AI tool in enhancing an individual student's academic achievement in the narrow domain addressed by the tool. Very rarely does the research consider the wider implications of AI in classroom settings and its broader impact on teachers and students: “Much of what exists now as ‘evidence-based’ is mostly related to how AI can work in education in a technical capacity without pausing to ask and comprehensively answer the question of whether AI is needed in education at all” (Nemorin, 2021, cited in Miao & Holmes, 2021, p. 26).

One important potential impact of AIED is on human cognition and brain development. It has long been thought, ever since Socrates argued that writing led to forgetfulness (Plato, 257 C.E.), that technology has a significant impact on human development and cognition. Inevitably, however, this impact is likely to be complex. For example, while one study has suggested that greater use of Global Positioning System (GPS) navigation devices and applications leads to a decline in spatial memory (Dahmani & Bohbot, 2020), another has shown that the use of smartphone alerts frees cognitive resources for other tasks (Fröscher et al., 2022). In particular, there are several outstanding questions with regard to the impact of technology on children's brains and cognition—especially important because children's cognitive structures and capabilities are by definition still in development. These questions include whether using technology is the cause of various cognitive and behavioural outcomes such as attention problems, whether the use of technology is implicated in restructuring parts of children's brains, and whether there are real health risks associated with technology use and, if so, what the causal mechanisms might be (Gottschalk, 2019). These questions are also likely to be critical for AIED, suggesting the need for new research in this area.

4.4 | Techno-solutionism

The conclusion that “AIED systems perform better than […] human teachers” (du Boulay, 2016, p. 80) has been used to justify their increasingly wide deployment around the world.
In particular, it has been argued that AIED might effectively fill the void in contexts, such as rural areas in developing countries, where there is insufficient access to the experienced and qualified teachers necessary to provide learners with the quality education that is their human right (XPRIZE, 2015). However, while the immediate cohort in such a context might benefit from being given access to an AIED tool, there are many challenges. To begin with, many rural areas do not yet have the necessary infrastructure (electricity and access to the Internet). Even where this is available, rarely are there the skilled support staff needed for the deployment, management and support of the required hardware and software. Most importantly, AIED in such contexts might address the apparent symptoms of the problem (for example, learners not receiving a quality education), but not necessarily its underlying and long-term socio-political causes (the lack of experienced and qualified teachers).

In practice, the way in which problems are articulated often depends on the interests of technology providers. The deeper social and cultural factors are rarely addressed, as they are difficult to change without broad stakeholder participation and policy change. As Krahulcova notes: “Unfortunately, most complex real-world problems require complex real-world solutions” (Krahulcova, 2021, paragraph 3). Accordingly, the problem of a lack of qualified teachers is probably best addressed by focusing on the professional development and support offered to inexperienced classroom teachers—which might itself be supported by appropriate AI. This, in turn, might benefit from establishing AI-assisted networks of colleagues and pedagogy experts across the country. The emphasis would again be on augmentation: using the technology to support rather than replace teachers.

4.5 | AIED colonialism

AIED corporations are increasingly selling their tools globally, creating what has been called AIED colonialism: Global North companies exporting their AIED tools into contexts in the Global South, creating asymmetries in power across and between nations. It has been noted that, all too often, “digital technologies function in ways that perpetuate the racial and colonial formations of the past” (Zembylas, 2021, p. 1). This is exacerbated by the fact that the overwhelming balance of AIED research is also carried out in the Global North, and rarely addresses cultural diversity or local policies and practices in any meaningful way (Blanchard, 2015).

AIED colonialism might involve the adoption of AIED tools created in one context in other places, leading to market and economic gains for Global North corporations, with the extraction of local data and capital out of the host country (Nemorin et al., 2022). These gains and extractions may begin with individual schools embedding AIED tools into teachers' everyday practices, before expanding to draw in entire state education systems in which single products are adopted across all schools.
However, AIED colonialism does not necessarily depend on specific tools being imported into Global South countries. More subtly, it might simply involve the language in which most classroom AIED tends to be trained—mainly American English (Cotterell et al., 2020). In any case, the impact of the English-trained models used by AIED tools in non-English contexts, and on the children who use them, remains unknown (Naismith & Juffs, 2021). AIED colonialism might also involve the imposition of particular approaches to pedagogy—at present, often instructionist and behaviourist—as embedded in most current commercial AIED tutoring systems. Finally, colonialism might also fall along a spectrum. For example, when third-party countries wish to take advantage of AI, some US corporations expect them to adopt ready-made existing AI products more or less as they have been developed for the US, while Chinese corporations, at least across the regions of China, are more likely to customise their products to address local circumstances and priorities (Knox, 2020; Lee, 2018). Either way, it is likely that relatively well-funded US, Chinese, or other Global North AIED tools will crowd out less well-funded but locally trained and potentially more locally sensitive alternatives.

4.6 | Commercialisation of education

A final potential roadblock noted in this article is the commercialisation of education by stealth. While learner-focused AI has been the subject of research for around 40 years, almost a decade ago it shifted from being a focus of research labs to being developed into commercial products by a growing number of multi-million-dollar-funded AIED companies. It is mostly these products that are being implemented in schools around the world, frequently by government (local and national) agencies. This is important to note for several reasons.

First, while original research is undertaken in academia with the explicit aim of enhancing teaching and learning, commercial organisations by definition focus on generating profits. As Holmes and colleagues ask:

Given that the children's interactions with these AI systems generate both technical knowledge about how the product works and market knowledge about how the product is used, are children in classrooms around the world being recruited by stealth to create and supply business intelligence designed to support the corporations' bottom lines—and is this being prioritised over the child's learning and cognitive development? (Holmes et al., 2022, p. 24)

For example, a study of the complex ways in which Google binds AIED users and developers into its infrastructure has shown that Google Classroom has become both a global collector of student data and a critical control point where the rules of participation are defined (Perrotta et al., 2021; Williamson, 2021). Although some flows of data from EU countries to other countries are limited by the provisions of the EU General Data Protection Regulation (GDPR), in practice it is very difficult to know how complex internet-based services use and process data (Day, 2021).
Commercial organisations rarely share information about their proprietary systems and their effectiveness. Beyond limiting interoperability, this has potentially important consequences for the social control of education and educational innovation. Data-driven AI, in particular, has important scale benefits, and in the current networked ecosystems this potentially leads to natural monopolies. The future of AIED, therefore, is also an economic and even a geopolitical issue. Commercial AIED organisations are not only shaping individual learners but are also beginning to influence governance and national policies. The societal and cultural impacts may also exceed the more surface-level economic ones. For example, Baker suggested that “they will impose their standards on what counts as knowledge at all” (Baker, 2000, p. 127). In short, the commercialisation of education through an AIED back door is fraught with both practical and ideological issues.

5 | CONCLUDING VISION

As progress in data-driven AI has led to exponentially growing computational requirements, it is now increasingly understood that the future of AI cannot necessarily be predicted by extrapolating the developments of the last decade. While some argue cogently that data-driven AI will, given sufficient data, soon be able to surpass human intelligence (e.g., LeCun & Browning, 2022), others argue similarly convincingly that data-driven AI is hitting a developmental ceiling, and that progress towards human-level intelligence is likely only to be achieved with a new paradigm, which might involve a combination of both approaches (e.g., Marcus, 2022). This is sometimes referred to as ‘neuro-symbolic AI’ (Susskind et al., 2021). Presumably, only time will tell.

In educational applications, however, the combination of knowledge-based and data-driven approaches represents a natural path forward. Data-driven AI provides important basic information-processing functionality, such as pattern recognition. Education, in contrast, has typically focused on the gradual development of domain-specific theoretical conceptual structures (e.g., Davydov, 1982; Tuomi, 2022). Many recent breakthroughs in data-driven AI, such as the capability to locate a cat in a picture or distinguish words in a spoken sentence, are simple tasks for an infant years before the child enters school. In educational contexts, the development of AI may therefore more constructively be seen as a joint development of human and artificial cognition. This suggests that the future of AIED should be understood from the point of view of AI-supported augmentation of human cognition and learning, an approach that has been an important line of thinking in AI throughout its history (Bush, 1945; Engelbart, 1963; Winograd & Flores, 1986).

From a purely technical point of view, given that the basic architecture of the internet is about to change, the world will soon be different. Ten years from now, data is exp
