In the United Kingdom, the Department for Education opened a conversation with all levels of the education system through a Call for Evidence on the use of AI in education. The key use-case themes that emerged included:

  • Creating educational resources
  • Lesson and curriculum planning
  • “Live” use in lessons
  • Assessment, marking, and feedback
  • Administrative tasks
  • AI skills training and AI literacy
  • Research
  • Proofreading and editing
  • Supporting coding, and the use of AI as a study aid and search tool

Respondents indicated that AI was used to design handouts, worksheets, presentations, model answers, and tests. AI was also used to create bespoke resources for pupils with special educational needs.

In the primary education system, AI was used to create reading materials and comprehension questions for pupils. In the secondary education system, AI was used to plan and design lessons, laboratory experiments, course outlines, and schemes of work.

“Live” use of AI in the classroom involved generating structures for essays, streamlining complex ideas or concepts, converting text to images as lesson stimuli, and supporting specific tasks and activities during a lesson.

AI was also used to prepare student reports, generate sample questions for exams, produce banks of multiple-choice questions, grade student work, and produce marking rubrics. Some educators reported using AI to detect plagiarism and to guide learners away from academic malpractice.

Many educators admitted that they lacked the skills or knowledge to use AI tools effectively. This skills deficit included prompt engineering and the use of AI in line with good pedagogical practice.

Other teachers used AI to create bespoke resources to support engagement and learning and to produce personalized study and revision plans for pupils. AI also supported accessibility and inclusion. Students from disadvantaged backgrounds were able to use AI to assist them with their work, where they otherwise may not have the needed scaffolding at home.

Law students at Oxford observed that AI tools were unfamiliar with many judgments. Moreover, the tools did not have access to chapters in edited books and white papers that their professors were using to teach, so AI was of limited help to those students.

In university settings across Europe, some academics see a parallel between AI and the advent of hand-held calculators, which began entering learning spaces in the 1970s. At the Strategic Development Office at Lund University, the aim is to put the focus back on learning and away from academic dishonesty. Faculty decide which students can use AI to support the research they are conducting.

The University of Hong Kong is allowing AI alongside a “Usage Dashboard”, which monitors and predicts the use of AI among HKU staff and students. In April 2023, Turnitin launched a tool that uses AI to detect AI-generated content. During the testing phase, the tool produced false positives: essays composed by faculty were flagged as written by AI. The tool was offered to over 10,000 educational institutions.

UNESCO’s guidance for the use of generative AI in education and research highlights the risk that AI will deepen societal divisions as educational and economic success increasingly depends on access to computers, electrical power, and internet connectivity that the most disadvantaged do not have.

UNESCO also highlights that present prototypes or foundation models can serve as a starting point for developing domain-specific models. Already, developers have started to refine a foundation model to build “EdGPT”. EdGPT will be trained using smaller amounts of high-quality, domain-specific education data.
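The fine-tuning idea behind EdGPT — adapting a general-purpose model with a small, high-quality domain corpus — can be sketched conceptually. The toy example below uses simple word counts in place of a neural network; the corpora and the `weight` up-weighting knob are invented for illustration and do not reflect how EdGPT is actually built:

```python
from collections import Counter

def train_unigram(corpus):
    """Count word frequencies to form a toy 'foundation model'."""
    counts = Counter()
    for sentence in corpus:
        counts.update(sentence.lower().split())
    return counts

def fine_tune(base_counts, domain_corpus, weight=5):
    """Blend a small, high-quality domain corpus into the base model.

    `weight` up-weights the domain data, standing in for the heavier
    influence a curated education corpus would have during fine-tuning.
    """
    tuned = Counter(base_counts)
    for sentence in domain_corpus:
        for word in sentence.lower().split():
            tuned[word] += weight
    return tuned

def probability(counts, word):
    """Relative frequency of `word` under the toy model."""
    total = sum(counts.values())
    return counts[word] / total if total else 0.0

# Generic text stands in for web-scale pre-training data;
# one curated sentence stands in for domain-specific education data.
general = ["the cat sat on the mat", "stocks rose sharply today"]
education = ["the lesson plan scaffolds the curriculum objectives"]

base = train_unigram(general)
tuned = fine_tune(base, education)

# After fine-tuning, education-domain terms become more probable.
print(probability(tuned, "curriculum") > probability(base, "curriculum"))
```

The point of the sketch is only the shape of the workflow: a large general model, a much smaller curated domain corpus, and an adaptation step that shifts the model toward the domain.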

EdGPT models targeting curriculum co-design would allow learning opportunities and interactive tasks to be produced that closely align with an effective pedagogical approach, specific curricular objectives, and levels of challenge suited to particular students.

EduChat is a foundation model developed by East China Normal University whose code, data, and limitations are shared as open source. MathGPT is being developed by the TAL Education Group and focuses on mathematical problem-solving.

Beyond refining foundation models, the new models will need to attend to domain-specific knowledge and incorporate information on multiple intelligences and cognitive preferences, and these requirements must be reflected in their algorithms. The challenge is to gauge the extent to which EdGPT models can support student-centred pedagogy and positive teacher-student interactions.

The deeper conundrum is to determine the extent to which learner and teacher data may be ethically harvested and used to inform an EdGPT model. Finally, there is a need for robust guardrails to prevent EdGPT from undermining students’ human rights.

The UNESCO advice (p. 14) underscores that the spread of AI across technologically advanced countries has exponentially accelerated the production and processing of data. This has deepened the concentration of AI wealth in the countries of the Global North. Data-poor regions have been left out and placed at long-term risk of being colonized by the standards, values, and norms of AI models that emanate from the Global North. This makes those models a poor basis for locally relevant AI algorithms in data-poor communities across much of the Global South and in disadvantaged communities in the Global North.