The use of AI is increasingly regulated, particularly in the education sector, to counter the risks and challenges that AI in education can create. To comply with these requirements, schools need to understand where and how they are using AI, so that they can identify which obligations apply to them.
This article is part of a series of insights from 9ine exploring the impact of Artificial Intelligence (AI) in education. From the opportunities it can bring, to the potential risks and challenges it creates when not implemented appropriately, this series will bring you up to speed and provide practical tips to remain compliant and ethical when using AI in education.
In our previous article, we looked at the leadership and oversight of AI, and whether schools need an AI Officer. In this article we take a practical look at the starting point for the compliance and governance of AI in schools, which is asking: Are we using AI? And if so, where and how?
We have previously discussed the opportunities that AI can bring for schools, but given that good strategy, leadership and oversight are key to realising these, it is important that schools have a holistic (as well as detailed) view of where they are currently using AI. Schools need to know where they are using AI, for what purposes and processes, and whether any third-party EdTech is involved. Strategically, this allows schools to understand the maturity of their AI strategy and how they compare to other schools when it comes to the use of AI in education.
Even if a school decides not to capitalise on the strategic and competitive benefits of understanding where it is using AI (perhaps due to time, budget or expertise constraints), it is likely to be legally required to do so anyway. Wherever a school is based geographically, it is likely to be subject to some form of regulation and compliance requirements governing how it uses AI. These may be new requirements introduced by AI-specific legislation, existing legal requirements (such as those in data protection and privacy laws), or principles that countries have introduced to exert some influence and control over how AI is being used.
Even in the unlikely event that no AI, privacy or data protection legislation requires your school to understand where you are using AI, the potential harms and risks that AI introduces mean schools will still need to understand how and where they are using it. This flows from the responsibility all schools have for safeguarding and child protection: protecting students from harm and abuse, both inside and outside of the school environment, by preventing harm, creating a safe environment, and identifying and addressing potential risks to children.
Whether it is to gain a competitive advantage, to comply with legal requirements, or to fulfil their responsibility to safeguard children against harm (or all three), schools need to know where and how they are using AI.
A good place to start when looking to understand where and how your school is using AI is to examine your Records of Processing Activities (RoPAs), a key compliance requirement of much data protection and privacy legislation globally. RoPAs are usually made up of data maps and an inventory of the processing activities your school is carrying out, and provide an overview of what your school is doing with personal data: typically the purpose of each processing activity, the categories of personal data and data subjects involved, the systems and third parties used, any international transfers, retention periods and security measures.
If you haven’t completed your RoPAs, then the need to categorise your systems is a good nudge to do so. This can be done by asking Data Protection Leads, Data Owners and/or System Owners (or the individuals responsible for processes using personal data within your school) to complete a record and provide this information, following some training on what a RoPA is and how to complete one.
If you have completed your RoPAs, then categorising your AI systems is a good opportunity to review them, to ensure they are up to date and reflect how your school is processing personal data. Of course, even if your RoPAs are complete and up to date, they may not capture AI systems the school is using that do not involve personal data, which you will need to account for separately (although these are likely to be few in number and of much lower risk).
RoPAs, and the process of completing them, will support schools in understanding where and how they are using AI.
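For schools that maintain their RoPA as a spreadsheet, a first pass at an AI inventory can be as simple as exporting it and filtering on an "uses AI" flag. The sketch below is a minimal, hypothetical illustration of that idea; the file name and column names are assumptions for the example, not a prescribed format.

```python
import csv

# Hypothetical RoPA export; the file and column names below are assumptions.
ROPA_EXPORT = "ropa_export.csv"  # e.g. columns: activity, system, vendor, uses_ai

def build_ai_inventory(path: str) -> list[dict]:
    """Return the processing activities whose systems are flagged as using AI."""
    with open(path, newline="", encoding="utf-8") as f:
        rows = csv.DictReader(f)
        # Keep only the activities the school has flagged as involving AI.
        return [row for row in rows if row.get("uses_ai", "").strip().lower() == "yes"]

if __name__ == "__main__":
    for item in build_ai_inventory(ROPA_EXPORT):
        print(f"{item['system']} ({item['vendor']}) - used for: {item['activity']}")
```

However the filtering is done, the point is the same: the RoPA already names the systems, vendors and purposes, so it is the natural place to surface where AI sits in your school.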
Once you have identified the AI systems your school is using, you need to categorise them, either because the legal requirements you are subject to attach different compliance obligations to different categories of AI system, or because you will need to prioritise AI systems so that you can manage the workload of ensuring compliant, ethical and safe use of AI within your school.
If your school is based in the European Union (EU), or uses AI systems whose outputs are used in the EU, then you will legally need to classify your AI systems into one of four categories under the new EU AI Act. The Act takes a risk-based approach and tailors different requirements to each category, from an absolute ban on unacceptable-risk AI systems from February 2025, through to lighter-touch transparency and code of conduct requirements. The four categories of AI system are: Unacceptable Risk (prohibited outright), High Risk (subject to strict obligations), Limited Risk (subject to transparency requirements) and Minimal Risk (largely unregulated, with voluntary codes of conduct encouraged).
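For illustration only, the sketch below shows how a school might record the outcome of that classification against its AI inventory and order its review work accordingly. The tier labels follow the Act's four categories, but the systems listed and the tiers assigned to them are hypothetical assumptions, not an assessment of any real product, and assigning a tier in practice is a legal judgement.

```python
from enum import Enum

class RiskTier(Enum):
    """The EU AI Act's four risk tiers; this type only records the outcome
    of the school's (legal) assessment, it does not make the judgement."""
    UNACCEPTABLE = "Unacceptable risk (prohibited)"
    HIGH = "High risk (strict obligations)"
    LIMITED = "Limited risk (transparency requirements)"
    MINIMAL = "Minimal risk (voluntary codes of conduct)"

# Hypothetical entries for illustration only.
ai_inventory = {
    "Adaptive learning platform": RiskTier.HIGH,
    "Homework-help chatbot": RiskTier.LIMITED,
    "Spam filter on school email": RiskTier.MINIMAL,
}

# Review the highest-risk systems first.
tier_order = list(RiskTier)
for system, tier in sorted(ai_inventory.items(), key=lambda kv: tier_order.index(kv[1])):
    print(f"{system}: {tier.value}")
```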
Conducting an initial assessment of which category an AI system belongs to allows schools to identify the compliance requirements for that specific system. Where a system is ‘High Risk’ and uses personal data, many countries also require a Data Protection (or Privacy) Impact Assessment (DPIA) to be completed.
A DPIA is an assessment that looks in more detail at the impact of processing personal data where that processing is likely to result in a high risk to the rights and freedoms of individuals. This requirement also works in reverse: where your school has already completed a DPIA for a processing activity that uses AI, it is a good indication that the systems involved could be classed as at least High Risk (but potentially Unacceptable) AI systems.
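To make that link concrete, a simple check over the inventory might flag provisional DPIA candidates. Again, this is only a sketch: the field names and the example records are assumptions, not a prescribed schema.

```python
# Hypothetical records; the field names are illustrative assumptions only.
ai_systems = [
    {"name": "Adaptive learning platform", "risk_tier": "high", "uses_personal_data": True, "dpia_completed": False},
    {"name": "Homework-help chatbot", "risk_tier": "limited", "uses_personal_data": True, "dpia_completed": False},
    {"name": "Spam filter on school email", "risk_tier": "minimal", "uses_personal_data": True, "dpia_completed": True},
]

def dpia_candidates(systems: list[dict]) -> list[str]:
    """High-risk systems that use personal data and do not yet have a DPIA."""
    return [
        s["name"]
        for s in systems
        if s["risk_tier"] == "high" and s["uses_personal_data"] and not s["dpia_completed"]
    ]

print("DPIA recommended for:", ", ".join(dpia_candidates(ai_systems)))
```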
Even if the EU AI Act doesn’t apply to your school directly, there are a number of reasons why locating and categorising the AI systems you use is still necessary.
First, given how closely AI legislation is related to data privacy and protection law, where your school is subject to the latter, you will need to identify and categorise your AI systems anyway.
Second, as we discussed in our previous article, the ‘Brussels Effect’ means many countries are likely to see the requirements and approach of the EU AI Act ripple globally. Even where you are not subject to the EU AI Act currently, its requirements will likely be adopted in full (or in part) by some countries, which could make this categorisation and the relevant compliance obligations a legal requirement for your school.
Third, as the EU AI Act will apply to EdTech vendors that supply AI systems into the EU, it is likely that these vendors will update their Terms and Conditions to impose liability and requirements on schools to comply with the EU AI Act via contract law. If the approach taken to the GDPR is any guide, EdTech vendors are likely to be pragmatic and use the same T&Cs for all schools globally, whether the product is used in the EU or not (with the exception of the US), attempting to discharge their liability and inadvertently imposing the ‘gold standard’ of the EU AI Act on schools regardless of whether it applies to them legally.
Fourth, we are starting to see school inspection frameworks in the UK, UAE, KSA and many other countries include specific requirements on evaluating AI, and if schools cannot evidence what they have done for compliance then there are implications, potentially including failed inspections.
Finally, even in the extremely remote chance that your school is under no compliance or inspection obligation to categorise the AI systems you use, and there are no future requirements on the horizon, categorising your AI systems in this way is still logical. It can support schools in prioritising the time and effort spent on ensuring AI is used responsibly, safely and ethically. Given the limited time and resources schools have, looking at the AI systems you use through a risk-based lens allows more time to be spent on the systems which present the most risk to the school, its students and its teachers, instead of on systems which are likely to produce low or minimal harm and risk.
There are various reasons why schools need to know how and where they are using AI: to understand the legal obligations they are under, to ensure their use of AI supports the school’s overarching goals and strategy, and to allocate their time efficiently when it comes to using AI and managing the associated risks. Schools will also need to make sure that they have the resources and expertise to do this, including a level of understanding amongst staff and students of what AI is and the risks it presents, so that educators are able to identify AI systems and categorise them.
At 9ine we offer a number of products and services that can help schools with the challenges that AI presents, including specific solutions to support schools with AI and, in particular, education on AI for educators.
In our next article, we will take a deeper dive into how AI might impact your EdTech Vendor Management.
9ine equips schools to stay safe, secure and compliant. We give schools access to all the expertise they need to meet their technology, cyber, data privacy, governance, risk & compliance needs - in one simple to use platform. For additional information, please visit www.9ine.com or follow us on LinkedIn @9ine.