AI in education: What approach are regulators taking to AI?

In seeking to balance the opportunities that AI can bring against the risks and challenges it creates, countries around the world have expressed the need to exercise some control over how AI is (and can be) used in society, with a keen focus on the education sector.

This article is part of a series of insights from 9ine exploring the impact of Artificial Intelligence (AI) in education. From the opportunities it can bring to the potential risks and challenges it creates when not implemented appropriately, this series will bring you up to speed and provide practical tips for remaining compliant and ethical when using AI in education.

In our previous article, we explored AI’s impact on cybersecurity. This week we are looking at how regulators are approaching AI around the world, following on from our recent webinar, ‘The Impact of AI Regulations on Schools – Exploring the EU AI Act and Global Trends for Responsible Use of AI in Education’. You can watch a recording here.

Why regulate AI?

Artificial Intelligence (AI) has become increasingly important and powerful in today’s world, with the potential to revolutionise many industries, education among them. With AI pivotal to the ‘fourth industrial revolution’, leaders around the world have acknowledged its importance and the need to exercise some control over it, so that the opportunities AI offers are realised whilst potential harms are avoided. Given the complexity of AI as a technology, the near-limitless range of use cases to which it can be applied, and the difficulty lawmakers have already experienced in keeping pace with technology, regulating AI is not straightforward.

So how are regulators approaching this globally? 

There has been a massive increase in the number of countries with laws containing the term ‘AI’, growing from 25 countries in 2022 to 127 in 2023, although legislation is not the only form of regulation in use: UNESCO has identified nine distinct regulatory approaches to AI. Hard law or soft law? Rules-based or principles-based? Overarching, sector-specific or use-case-specific? These are just some of the questions leaders are asking themselves when it comes to regulating AI, and what we can see is that different approaches are emerging as countries try to balance pro-innovation ideals with the need to protect people and societies from AI’s potential harms.

European Union (EU) 

There is no doubt that the EU has first-mover advantage in legislating for AI, with its prescriptive, overarching, risk- and rules-based EU AI Act. The Act is similar in structure and scope to the EU’s General Data Protection Regulation (GDPR), but comes into force in stages over a two-year period and carries significantly higher fines than the GDPR: up to €35,000,000 or 7% of global annual turnover, whichever is higher. Categorising AI systems into four risk levels (unacceptable, high, limited and minimal), the EU AI Act takes a risk-based approach and tailors different requirements to each category, from an absolute ban on unacceptable-risk AI systems from February 2025 through to lighter transparency and code-of-conduct requirements. Where the EU AI Act directly applies to schools, they need to be aware of their obligations as ‘Deployers’ (and, in some cases, ‘Providers’) of AI systems. The Act also specifically calls out education as an area of focus: using AI systems to infer emotions in education is classed as an ‘unacceptable risk’, and the use of various other AI systems in education is deemed ‘high-risk’, attracting higher levels of compliance requirements.
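To make the risk-based structure concrete, the sketch below (written for this article; the four tier labels and the €35m/7% ceiling reflect the Act, but the example systems and their classifications are illustrative assumptions, not legal advice) shows how a school might record the risk tier of each AI system it uses and what the maximum penalty ceiling looks like:

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers defined by the EU AI Act."""
    UNACCEPTABLE = "unacceptable"  # banned outright from February 2025
    HIGH = "high"                  # strict compliance requirements
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # largely unregulated

# Hypothetical examples of how school-deployed systems might be tiered.
example_inventory = {
    "emotion-inference proctoring tool": RiskTier.UNACCEPTABLE,
    "automated exam-grading system": RiskTier.HIGH,
    "chatbot on the school website": RiskTier.LIMITED,
    "spam filter on school email": RiskTier.MINIMAL,
}

def max_fine_eur(global_annual_turnover_eur: float) -> float:
    """Ceiling for the most serious infringements:
    EUR 35m or 7% of global annual turnover, whichever is higher."""
    return max(35_000_000.0, 0.07 * global_annual_turnover_eur)

for system, tier in example_inventory.items():
    print(f"{system}: {tier.value} risk")

# For a vendor with EUR 1bn turnover, 7% = EUR 70m, which exceeds EUR 35m.
print(f"Maximum fine at EUR 1bn turnover: EUR {max_fine_eur(1_000_000_000):,.0f}")
```

The inventory step matters because the tier a tool falls into determines which obligations apply, so knowing the tier of every AI system in use is the natural starting point for compliance.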

Even where the Act doesn’t currently legally apply to a school, given the ripple effect that the GDPR had on privacy regulation globally (and that compliance with it will be required to trade with the EU), it is likely that countries will follow the EU’s lead and replicate the Act’s requirements at least in part (if not in full) over time. Also, as ‘Providers’ of AI systems, EdTech vendors will need to provide detailed technical and functional documentation on their AI models to EU-based schools by August 2025, which they will likely do via an update to their terms and conditions. If the approach taken to the GDPR is any guide, EdTech vendors are likely to be pragmatic and use the same T&Cs for all schools globally, whether the product is used in the EU or not (with the exception of the US), attempting to discharge their liability and inadvertently imposing the ‘gold standard’ of the EU AI Act on schools regardless. Schools globally therefore need to be aware of the impact and requirements of the EU AI Act and be prepared for an influx of documentation from EdTech vendors, including identifying who within the school it will be sent to and reviewed by.

UK 

In contrast to the EU, the UK has taken a cross-sector, principles-based approach in an attempt to remain agile whilst being robust enough to address key concerns around potential societal harms, through a dual focus on ‘pro-innovation’ and ‘pro-safety’ (although it is worth noting that AI is already regulated to some extent by existing laws where it uses personal data, such as the UK GDPR and the DPA 2018). The AI-focused principles are: safety, security and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress.

With regulators being encouraged to interpret and apply these principles, Ofsted has published its strategic approach to AI regulation, referring to multiple use cases and concluding that its existing inspection frameworks and regulations already enable it to assess AI’s impact on learners. Whilst it may seem that no additional requirements for schools are being introduced, Ofsted will be looking at schools’ use of AI through the lens of these principles and frameworks, checking that schools have specifically evaluated and mitigated the risks of deploying AI.

There has also been some suggestion of more prescriptive AI regulation in the UK, and the Information Commissioner has been clear that regulating AI is a priority, describing it as one of the biggest transformations his office has ever seen, potentially meaning further compliance requirements and strong enforcement are on the way.

What about the rest of the world?

Whilst the EU is the front-runner in legislating for AI, and the current approach of the UK provides an interesting contrast, countries elsewhere are taking a range of approaches to the regulation of AI.

US 

Whilst the US has acknowledged the importance of AI regulation, it has yet to introduce an overarching AI law similar to the EU’s, and, much like Ofsted, the US Department of Education is using its existing authorities to mitigate the risks of AI in schools. Various laws on AI have been enacted at the federal level, either as standalone legislation or as AI-related provisions in broader acts. Existing laws in other areas, such as privacy and intellectual property law, have also been used to regulate AI, albeit with limited application, and states have introduced various bills specifically on AI, with Colorado enacting the first comprehensive US AI legislation, which takes effect in 2026.

APAC  

The approach in APAC is fragmented, with countries displaying varying degrees of readiness for AI governance. China is the clear front-runner, being the first to introduce a registration regime and issuing measures in respect of specific uses of AI. Elsewhere, Singapore has taken the approach of regulating AI through existing laws (similar to Hong Kong) and sector-based frameworks which provide general guidance without binding effect. Other APAC countries, like Taiwan and South Korea, have proposed basic AI laws, whilst Australia has released a voluntary framework but has also published proposals for mandatory guardrails for safe and responsible AI, acknowledging that its current framework is not fit for purpose.

Middle East

There is no unified AI regulatory framework in the Middle East, but many countries in the region have national strategies and initiatives. The UAE is the regional leader in AI regulation, having appointed the world’s first Minister of State for Artificial Intelligence in 2017, with a strategy of balancing innovation with ethical standards. Saudi Arabia has also notably introduced draft AI Ethics Principles and Generative AI Guidelines, with the Saudi Data and Artificial Intelligence Authority (SDAIA) having the authority to establish and enforce future AI laws.

LATAM 

Latin American countries have introduced a series of bills, strategies, reports and policies in approaching the regulation of AI, with most countries drawing inspiration from the EU AI Act. Brazil has emerged as the clear leader in AI regulation, introducing its fourth bill on AI in 2023. The bill bears clear similarities to the EU AI Act, taking a risk-based approach, with proposed fines of up to R$50,000,000 per infraction or up to 2% of annual revenue.

Africa 

In February, the African Union’s (AU) development agency published a draft policy to advance an Africa-centric path for the development of emerging AI technologies, setting out a blueprint for AI regulation by Member States. A number of countries in Africa have taken steps to regulate AI via strategies, whilst others are still in the process of consultation, and some have yet to make any announcements on their approach to AI regulation.

Geopolitics of AI and AI Nationalism 

With these differing approaches to the regulation of AI, we are starting to see the geopolitics of AI evolve, with some companies choosing not to launch certain AI products and features in countries where regulation is seen as too restrictive or unpredictable. Indeed, Meta has withheld its multimodal AI models from the European Union, citing the ‘unpredictable nature’ of EU privacy regulation and moving forward with text-only versions of its Llama AI models in the EU. Apple has also decided not to launch certain features from its iOS 18 update in the EU, expressing concerns about the Digital Markets Act (DMA). Having previously been hit with large fines in the EU for GDPR violations, these large companies are clearly cautious. Given the power of AI and its opportunities in education, schools need to consider the impact of this AI nationalism, which may result in restricted access to global AI tools. This could negatively impact equity in education, widen the digital divide and create gaps in the curriculum where tools are unavailable, or lead to higher costs as vendors pass on the cost of compliance.

What can schools do? 

The regulation of AI can be a daunting topic and a moving target for schools, but whilst the mechanisms differ, there are clear common themes in the approaches countries are taking. From the risk of fines, harms and reputational damage arising from non-compliance, to the risk of litigation from parents and students, it is clear that schools need to understand the requirements they are under (legally, contractually and ethically) and how to comply with them, including who within the school has responsibility for this and whether they have the expertise required.

At 9ine we offer a number of products and services that can help schools with the challenges that AI presents. Specific solutions include:

  • 9ine Academy LMS: Our AI Pathway is your school's learning partner for AI ethics and governance. With differentiated course levels, you can enrol staff in an introductory course to AI, then, for those staff with greater responsibility, enrol them in intermediate and advanced courses. There are also specialist courses for AI in safeguarding, child protection and technology.
  • Privacy Academy: A six-month programme of monthly training and professional development, delivered virtually and in person, to manage privacy law, AI, cybersecurity and safeguarding risks of harm at your school. Following a project-based methodology, we train and coach you on implementing a Privacy Management Programme, considering AI, cyber and safeguarding risks of harm.
  • Tech Academy: Training and professional development to enhance school IT teams' skills in security and operations.
  • Vendor Management: Removes the pain and time involved in evaluating and vetting third-party vendor contracts, privacy notices, information security policies and other compliance documents. Vendor Management provides a thorough, ‘traffic light’-based approach to inform you of vendor privacy, cyber, AI and safeguarding risks, and supports you in demonstrating to parents, staff and regulators how you effectively evaluate and manage the technology you choose to deploy.

In our next article, we will take a deep dive into The Impact of AI on Privacy, Ethics and Data Protection in Education. 

9ine company overview
9ine equips schools to stay safe, secure and compliant. We give schools access to all the expertise they need to meet their technology, cyber, data privacy, governance, risk and compliance needs, in one simple-to-use platform. For additional information, please visit www.9ine.com or follow us on LinkedIn @9ine.
