AI in education: Finding your AI systems and categorising them for risk

Written by 9ine | Nov 14, 2024 4:05:48 PM

The use of AI is increasingly regulated, particularly in the education sector, to counter the risks and challenges that AI in education can create. To comply with these requirements, schools need to understand where and how they are using AI, so that they know which compliance obligations apply to them.

This article is part of a series of insights from 9ine exploring the impact of Artificial Intelligence (AI) in education. From the opportunities it can bring to the potential risks and challenges it creates when not implemented appropriately, this series will bring you up to speed and provide practical tips for remaining compliant and ethical when using AI in education.

In our previous article, we looked at the leadership and oversight of AI, and whether schools need an AI Officer. In this article, we take a practical look at the starting point for the compliance and governance of a school's use of AI, which is asking: are we using AI? And if so, where and how?

Why do schools need to know where they are using AI? 

We have previously discussed the opportunities that AI can bring for schools, but given that good strategy, leadership and oversight are key to realising these, it is important that schools have a holistic (as well as detailed) view of where they are currently using AI. Schools need to know where they are using AI, for what purposes and processes, and whether any third-party EdTech is involved. Strategically, this allows schools to understand the maturity of their AI strategy and how they compare to other schools in their use of AI in education.

Even if a school decides not to capitalise on the strategic and competitive benefits of understanding where it is using AI (perhaps due to time, budget or expertise constraints), it is likely to be legally required to do so. Wherever a school is based geographically, it is likely to be subject to some form of regulation and compliance requirements governing how it uses AI. These may be new requirements (introduced by AI-specific legislation), existing legal requirements (such as those in data protection and privacy laws) or principles that countries have introduced to exert some influence and control over how AI is being used.

Even in the remote chance that no AI, privacy or data protection legislation requires your school to understand where you are using AI, the potential harms and risks that AI introduces mean schools still need to understand how and where they are using it. This follows from the responsibility all schools have for safeguarding and child protection: protecting students from harm and abuse, both inside and outside the school environment, by preventing harm, creating a safe environment, and identifying and addressing potential risks to children.

Whether it is to gain a competitive advantage, to comply with legal requirements, or to meet their responsibility to safeguard children against harm (or all three), schools need to know where and how they are using AI.

How can I find my AI Systems? 

A good place to start when looking to understand where and how your school is using AI is to examine your Records of Processing Activities (RoPAs), a key compliance requirement of much data protection and privacy legislation globally. RoPAs are usually made up of data maps and an inventory of the processing activities your school carries out, and they provide an overview of what your school is doing with personal data. They should include the following (sketched as a simple data model after the lists below):

  • The purpose of processing the personal data (that is, data relating to an identified or identifiable individual); this description may explicitly or implicitly call out that AI is being used
  • Details of who the data is shared with, which is where RoPAs are likely to identify any third-party EdTech vendors whose systems may use AI

RoPAs should also include: 

  • What sort of personal data is collected and processed
  • The categories of data subjects to whom the data relates (students, teachers, vendors, visitors, parents etc.)
  • Retention and deletion timelines 
  • A general description of the technical and organisational security measures used to protect the data 
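
To make this concrete, the sketch below shows one way a RoPA entry could be modelled in Python and then filtered to surface AI use and third-party vendors. The field names, the RopaEntry class and the uses_ai flag are illustrative choices of our own, not a prescribed RoPA format or a 9ine product schema:

```python
from dataclasses import dataclass, field

# A hypothetical, minimal model of one RoPA entry. Field names are
# illustrative only; real RoPAs follow your school's own template.
@dataclass
class RopaEntry:
    activity: str                    # the processing activity, e.g. "Admissions screening"
    purpose: str                     # why the personal data is processed
    shared_with: list[str] = field(default_factory=list)    # third parties / EdTech vendors
    data_categories: list[str] = field(default_factory=list)  # e.g. "grades", "attendance"
    data_subjects: list[str] = field(default_factory=list)    # students, teachers, parents...
    retention: str = ""              # retention and deletion timeline
    security_measures: str = ""      # technical and organisational measures
    uses_ai: bool = False            # whether AI is (explicitly or implicitly) involved

def find_ai_activities(ropa: list[RopaEntry]) -> list[RopaEntry]:
    """Return the entries that flag AI use - a starting inventory of AI systems."""
    return [entry for entry in ropa if entry.uses_ai]

# Example: one entry that would surface in an AI review.
ropa = [
    RopaEntry(
        activity="Automated essay feedback",
        purpose="Evaluate learning outcomes using an AI writing tool",
        shared_with=["ExampleEdTech Ltd"],   # hypothetical vendor
        data_subjects=["students"],
        uses_ai=True,
    ),
]
for entry in find_ai_activities(ropa):
    print(entry.activity, "->", entry.shared_with)
```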

If you haven't completed your RoPAs, then the need to categorise your AI systems is a good nudge to do so. This can be done by having your Data Protection Leads, Data Owners and/or System Owners (or whoever is responsible for processes using personal data within your school) complete a record and provide this information, following some training on what a RoPA is and how to complete one.

If you have completed your RoPAs, then categorising your AI systems is a good opportunity to review them and ensure they are up to date and reflect how your school is processing personal data. Of course, even if your RoPAs are complete and current, they may not capture AI systems the school is using that do not involve personal data, which you will need to account for separately (although these are likely to be a small number of much lower-risk systems).

RoPAs, and the process of completing them, will support schools in understanding where and how they are using AI. 

Why do I need to categorise my AI Systems and how? 

Once you have identified the AI systems your school is using, you need to distinguish between them, either because the legal requirements you are subject to attach different compliance obligations to different categories of AI system, or because you will need to prioritise AI systems so that you can manage the workload of ensuring compliant, ethical and safe use of AI within your school.

If your school is based in the European Union (EU), or it uses AI that produces outputs in the EU, then you will legally need to classify your AI systems into one of four categories under the new EU AI Act. The Act takes a risk-based approach and tailors different requirements to each category, from an absolute ban on unacceptable-risk AI systems from February 2025 through to lighter-touch transparency and code of conduct requirements. The four categories of AI system, with a simple triage sketch following the list, are:

  • Unacceptable Risk AI Systems: These are systems and practices where the potential risks of using AI are deemed unacceptable, meaning they are prohibited. The Act details the specific prohibited systems, and the use of AI systems to infer the emotions of individuals in education is specifically called out as one of them.
  • High Risk AI Systems: These are AI systems which the EU AI Act specifically lists as having a high risk of causing harm to the health, safety or fundamental rights of individuals. Certain AI systems in education are specifically called out as 'high-risk', including those used to determine access or admission to schools; to evaluate learning outcomes and pathways; to assess the appropriate level of education an individual will receive or be able to access; and to monitor and detect prohibited behaviours of students in tests. Where a school is using a high-risk AI system, there are a number of specific and significant compliance requirements it has to adhere to.
  • Limited Risk AI Systems: These AI systems carry light-touch compliance obligations, focused mainly on transparency. They are systems where the risk:
    1. Does not meet the threshold for High Risk (or is listed in the Act but does not lead to a significant risk of harm in practice, for example because the system performs such a narrow task that it does not materially influence the outcome of decision-making, or because the risks are limited and not increased through the use of AI); but
    2. Cannot be said to be low or minimal
  • Minimal or Low Risk AI Systems: These are all AI systems which do not fall under the above categories due to the low risk of using them (e.g. spam filters and video games). They have no further requirements under the EU AI Act, but must comply with applicable existing legislation.
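
To illustrate how these categories can drive prioritisation, here is a simplified triage sketch in Python. The yes/no questions and the triage function are our own rough simplification of the Act's logic, intended only as a prioritisation aid; an actual classification requires the Act's full text (including its Annex III list) and, usually, professional advice:

```python
from enum import Enum

class AiRisk(Enum):
    UNACCEPTABLE = "Unacceptable (prohibited)"
    HIGH = "High"
    LIMITED = "Limited"
    MINIMAL = "Minimal or low"

# Simplified triage questions, loosely mirroring the EU AI Act's categories.
def triage(prohibited_practice: bool,
           listed_high_risk: bool,
           materially_influences_outcome: bool,
           transparency_risk_only: bool) -> AiRisk:
    if prohibited_practice:            # e.g. inferring students' emotions
        return AiRisk.UNACCEPTABLE
    if listed_high_risk and materially_influences_outcome:
        return AiRisk.HIGH             # e.g. admissions or exam-proctoring systems
    if transparency_risk_only or (listed_high_risk and not materially_influences_outcome):
        return AiRisk.LIMITED          # mainly transparency obligations
    return AiRisk.MINIMAL              # e.g. spam filters, video games

# A hypothetical admissions-scoring tool would triage as High Risk:
print(triage(prohibited_practice=False, listed_high_risk=True,
             materially_influences_outcome=True, transparency_risk_only=False))
```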

Conducting an initial assessment of which category an AI system belongs to tells schools the compliance requirements for that specific system. Where a system is 'High Risk' and uses personal data, many countries also require a Data Protection (or Privacy) Impact Assessment (DPIA) to be completed.

A DPIA is an assessment that looks in more detail at the impact of processing personal data where that processing is likely to result in a high risk to the rights and freedoms of individuals. This requirement also means that where your school has already completed DPIAs (and the processing activity uses AI), it is a good indication that the systems involved could be classed as at least High Risk (and potentially Unacceptable) AI Systems.
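
Expressed as a rule of thumb, and only that, the relationship between DPIAs and high-risk AI runs in both directions. The hypothetical helper functions below sketch the two signals; they are not a legal test:

```python
# "high_risk" here means the system triaged as High Risk (or Unacceptable)
# in the sketch above. These are rough heuristics, not a legal test.
def dpia_likely_required(uses_personal_data: bool, high_risk: bool) -> bool:
    """High-risk AI processing personal data is a strong signal a DPIA is needed."""
    return uses_personal_data and high_risk

def likely_high_risk_given_dpia(dpia_already_done: bool, uses_ai: bool) -> bool:
    """Conversely, an existing DPIA on an AI-using activity suggests the system
    could be at least High Risk and is worth classifying first."""
    return dpia_already_done and uses_ai

print(dpia_likely_required(uses_personal_data=True, high_risk=True))      # True
print(likely_high_risk_given_dpia(dpia_already_done=True, uses_ai=True))  # True
```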

But what if the EU AI Act doesn’t apply to our school?

Even if the EU AI Act doesn't apply to your school directly, there are a number of reasons why locating and categorising the AI systems you use is still necessary.

First, AI legislation is closely related to data privacy and protection law; where your school is subject to the latter, you will need to identify and categorise your AI systems anyway.

Second, as we discussed in our previous article, the 'Brussels Effect' means many countries are likely to see the requirements and approach of the EU AI Act ripple globally. Even where you are not subject to the EU AI Act currently, its requirements will likely be adopted in full or in part by some countries, meaning this categorisation and the relevant compliance obligations could become a legal requirement for your school.

Third, as the EU AI Act will apply to EdTech vendors supplying AI systems into the EU, it is likely that these vendors will update their Terms and Conditions to impose liability and requirements on schools to comply with the EU AI Act via contract law. If the approach taken to the GDPR is any guide, EdTech vendors are likely to be pragmatic and use the same T&Cs for all schools globally, whether the product is used in the EU or not (with the exception of the US), attempting to discharge their liability and inadvertently imposing the 'gold standard' of the EU AI Act on schools regardless of whether it applies to them legally.

Fourth, we are starting to see school inspection frameworks in the UK, UAE, KSA and many other countries include specific requirements on evaluating AI; if schools cannot evidence what they have done for compliance, there are implications, potentially including failed inspections.

Finally, even in the extremely remote chance that your school is under no compliance or inspection obligation to categorise the AI systems you use, and no future requirement is on the horizon, categorising your AI systems in this way is still logical. It can support schools in prioritising the time and effort spent on ensuring AI is used responsibly, safely and ethically. Given the limited time and resources schools have, looking at the AI systems you use through a risk-based lens allows more time to be spent on the systems which present the most risk to the school, its students and its teachers, instead of on systems likely to produce low or minimal harm and risk.

What do schools need to do next? 

There are various reasons why schools need to know how and where they are using AI. Schools need to understand the legal obligations they are under, know how AI is being used as part of the school's overarching goals and strategy, and be able to allocate their time efficiently when managing the associated risks. Schools will also need the resources and expertise to do this, including a level of understanding among staff and students of what AI is and the risks it presents, so that educators are able to identify AI systems and categorise them.

At 9ine we offer a number of products and services that can help schools with the challenges that AI presents. Specific solutions to support schools with AI (and in particular, AI education for educators) include:

  • 9ine Academy LMS: Our AI Pathway is your school's learning partner for AI ethics and governance. With differentiated course levels, you can enrol staff in an introductory course on AI, then enrol those with greater responsibility in Intermediate and Advanced courses. There are also Specialist courses for AI in Safeguarding, Child Protection and Technology.
  • Application Library: A solution that enables all staff to access a central, searchable library of all EdTech in the school. The library contains everything staff need to know about the AI in use (if any), as well as privacy, safeguarding and cyber risks. With easy-to-add 'How to' and 'Help' guides, Application Library becomes a single, central digital resource. Through implementing Application Library, your school will identify duplication in EdTech, reduce contract subscription costs and have a workflow for staff to follow when requesting new EdTech.
  • Vendor Management: Removes the pain, and the time, of evaluating and vetting third-party vendor contracts, privacy notices, information security policies and other compliance documents. Vendor Management provides a thorough, 'traffic light'-based approach to inform you of vendor privacy, cyber, AI and safeguarding risks, and supports you in demonstrating to parents, staff and regulators how you effectively evaluate and manage the technology you choose to deploy.
  • Privacy Academy: A live, virtual six-month programme of monthly training and professional development for managing privacy law, AI, cybersecurity and safeguarding risks of harm at your school. Following a project-based methodology, we train and coach you on implementing a Privacy Management Programme, considering AI, cyber and safeguarding risks of harm.

In our next article, we will take a deeper dive into how AI might impact your EdTech Vendor Management.

9ine company overview

9ine equips schools to stay safe, secure and compliant. We give schools access to all the expertise they need to meet their technology, cyber, data privacy, governance, risk & compliance needs, all in one simple-to-use platform. For additional information, please visit www.9ine.com or follow us on LinkedIn @9ine.