AI in Education: The Virginia AI Act. What do schools need to know?
9 min read
9ine · Mar 3, 2025
The Virginia legislature has passed a comprehensive AI bill, focused on protecting individuals from harm caused by the use of high-risk AI systems in consequential decisions. If signed into law, schools will need to be compliant with it by 1 July 2026, but what will they need to do?
The Virginia legislature has passed the High-Risk Artificial Intelligence Developer and Deployer Act (Virginia AI Act), a comprehensive AI bill focused on protecting individuals from harm caused by the use of high-risk AI systems in consequential decisions, and in particular from algorithmic discrimination. If signed into law by the Governor, Virginia would become the second state (after Colorado) to have a comprehensive state-level AI law, which would take effect on 1 July 2026. There is a chance that the Governor will choose to veto the bill, given that it passed by narrow margins and in light of President Trump’s Executive Order on Removing Barriers to American Leadership in Artificial Intelligence, which revoked AI policies and directives seen as barriers to American AI innovation. However, Virginia’s bill is far more industry-friendly than Colorado’s Anti-Discrimination in AI law (Colorado AI Act), which may improve its chances of being signed. But what would this Act mean for schools if it does come into effect?
The Act applies to deployers and developers of high-risk AI systems that operate in Virginia. A ‘developer’ is anyone that develops, or intentionally and substantially modifies, a high-risk AI system that is made available to deployers or residents in Virginia, regardless of whether there is a cost involved. A ‘deployer’ is anyone that deploys or uses a high-risk AI system to make a consequential decision in Virginia.
Where the territorial scope of the Act applies, schools are most likely to be considered ‘deployers’, because they typically acquire the AI systems they use from third-party EdTech vendors. However, if a school develops an AI system, or substantially modifies one that it acquires from an EdTech vendor, it could also be considered a ‘developer’. As the obligations on ‘developers’ and ‘deployers’ differ, it is important for a school to clarify which role it plays in relation to each high-risk AI system it uses. For example, if a school developed its own version of ChatGPT and integrated it with school systems, it might be classed as a developer as well as a deployer.
The Act does not apply to all ‘AI systems’, but to ‘high-risk AI systems’: AI systems used to make ‘consequential decisions’ about ‘consumers’. These are all important definitions in understanding how the Act, in its current form, would apply to a school.
High-risk AI systems are defined as any AI systems that are specifically intended to automatically make, or be a substantial factor in making, a consequential decision. The Act also sets out a list of exclusions, and AI systems that fall within one of these will not be considered high-risk.
A ‘consequential decision’ is a decision that has ‘a material legal, or similarly significant, effect on the provision or denial to any consumer’ of certain opportunities and services listed in the Act, including access to education enrollment or an education opportunity and access to employment.
Because the bill specifically calls out education, where schools are using high-risk AI systems to make consequential decisions on enrollment or education opportunities, the Act would apply. Equally, where schools are using high-risk AI systems to make consequential decisions on access to employment, the Act would also apply.
A ‘consumer’ is defined as a Virginia resident acting only in an individual or household context, and does not include an individual acting in a commercial or employment context. This is an important definition for schools: the Act would apply to a school’s use of high-risk AI systems to make consequential decisions about students in general, but it would only apply to teachers and other staff when the consequential decision concerns their ‘access to employment’, and not to decisions made about them once they are employed, a gap that has been widely criticised.
Another definition which is important for understanding when the Act applies is ‘algorithmic discrimination’. This is the use of an AI system that results in unlawful differential treatment or impact that disfavors an individual or group of individuals on the basis of their actual or perceived age, color, disability, ethnicity, genetic information, limited proficiency in the English language, national origin, race, religion, reproductive health, sex, sexual orientation, veteran status, or other classification protected under state or federal law. The Act specifically aims to prevent this happening to individuals in consequential decisions.
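To make these interlocking definitions easier to apply in practice, the sketch below expresses them as a rough scoping check a school could run through for each AI use case. It is purely illustrative and not part of the Act or of any 9ine product: the field names, the helper function and the list of decision areas are hypothetical, and the list reflects only the areas called out above rather than the Act’s full list.

```python
# Illustrative only: a rough scoping aid based on the definitions discussed above.
# The dataclass fields and helper are hypothetical, not terms used by the Act itself.
from dataclasses import dataclass


@dataclass
class AIUseCase:
    makes_or_substantially_informs_decision: bool  # makes, or is a substantial factor in, a decision
    decision_area: str                             # e.g. "education_enrollment", "access_to_employment"
    affects_virginia_resident: bool                # territorial scope
    individual_or_household_context: bool          # 'consumer' excludes commercial/employment contexts


# Areas the article identifies as 'consequential' for schools (the Act's full list is longer).
CONSEQUENTIAL_AREAS = {"education_enrollment", "education_opportunity", "access_to_employment"}


def likely_in_scope(use_case: AIUseCase) -> bool:
    """First-pass check: does this look like a high-risk system making a
    consequential decision about a consumer? Legal review is still needed."""
    return (
        use_case.makes_or_substantially_informs_decision
        and use_case.decision_area in CONSEQUENTIAL_AREAS
        and use_case.affects_virginia_resident
        and use_case.individual_or_household_context
    )


# Example: an admissions-scoring tool used on applicants resident in Virginia.
admissions_tool = AIUseCase(True, "education_enrollment", True, True)
print(likely_in_scope(admissions_tool))  # True -> treat as potentially in scope
```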
If the bill is passed, and a school has established that it is deploying a ‘high-risk AI system’ to make ‘consequential decisions’ about ‘consumers’, it will be subject to a number of requirements. Whilst the requirements for deployers (which schools are most likely to be) and developers differ, schools should be aware of both. This is because schools will need to meet their requirements as a deployer, and to do so they will need the information which developers are obliged to provide them with. As part of their vendor management process, schools will need to be able to verify that the developer has met its compliance requirements and that the school has all the information it needs from the developer to meet its own requirements.
So what requirements are developers and deployers under?
Developers of high-risk AI systems are required to exercise a reasonable duty of care to protect consumers from any known, or reasonably foreseeable, risks of algorithmic discrimination arising from intended and contracted uses. They are also required to make certain documentation about the high-risk AI system available to deployers or other developers, including:
- Its intended uses, purpose and benefits;
- Its known limitations and risks, including risks of algorithmic discrimination;
- How the system was evaluated for performance and for mitigation of risks, including algorithmic discrimination;
- The instructions for using the system and monitoring its performance;
- A description of the intended outputs of the system; and
- Any additional documentation reasonably necessary to assist in understanding the outputs and monitoring the performance of the high-risk AI system for risks of algorithmic discrimination.
This information can be provided through artefacts such as system cards, impact assessments and risk management policies. As under the Colorado Act, where developers conform with NIST’s Artificial Intelligence Risk Management Framework, ISO/IEC 42001, or another nationally or internationally recognised risk management framework for AI, they will be presumed to be in conformity with these requirements.
Where intentional and substantial modifications to the AI system require updates to any of this documentation, the updates will need to be provided to deployers and other developers within 90 days. There are also additional requirements for developers if the high-risk AI system is a ‘high-risk generative AI system’.
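As a practical aid, a school could keep a simple record of whether each developer-supplied artefact has been received for a given system. The sketch below is a minimal, hypothetical example of such a checklist; the item names simply mirror the documentation list above and are not terms defined by the Act or features of any 9ine product.

```python
# Hypothetical vendor-documentation checklist mirroring the list above.
# Purely illustrative; not a statutory form or a 9ine product feature.
DEVELOPER_DOCUMENTATION = [
    "intended uses, purpose and benefits",
    "known limitations and risks, including algorithmic discrimination",
    "how performance and risk mitigation were evaluated",
    "instructions for use and performance monitoring",
    "description of intended outputs",
    "additional documentation needed to monitor for algorithmic discrimination",
]


def outstanding_items(received: set[str]) -> list[str]:
    """Return the documentation items a developer has not yet supplied."""
    return [item for item in DEVELOPER_DOCUMENTATION if item not in received]


# Example: the vendor has so far shared a system card covering uses and limitations.
received_so_far = {
    "intended uses, purpose and benefits",
    "known limitations and risks, including algorithmic discrimination",
}
for item in outstanding_items(received_so_far):
    print("Still to request:", item)
```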
As deployers, schools will have a number of obligations under the Act, including implementing policies and processes, completing impact assessments, providing transparency and upholding individual rights. Like developers, deployers are under an obligation to exercise a reasonable duty of care to protect individuals from any known or reasonably foreseeable risks of algorithmic discrimination.
If the developer makes an intentional and substantial modification to the AI system, schools will need to review and, where necessary, update their disclosures within 30 days to make sure that they remain accurate.
There is currently no private cause of action under the Act; only the Attorney General (AG) would have the power to bring an action. Where the AG has reasonable cause to believe that a school is in violation of the Act, they will be able to issue a civil investigative demand and require information to be disclosed by the school, including its risk management policy and any relevant impact assessments. This could lead to court action, and violations could result in civil penalties of up to $1,000 per violation, plus reasonable attorney fees, expenses and costs as determined by the court. Where violations are wilful, they could result in civil penalties of $1,000 to $10,000 per violation, plus reasonable attorney fees, expenses and costs as determined by the court.
Prior to bringing an action, the AG will determine, in consultation with the developer or deployer, whether it is possible to cure the violation. If it is, the AG may issue a notice of violation and afford the school the opportunity to cure it within 45 days of receipt of the notice. In deciding whether to give a school this opportunity, the AG would consider the number of violations; the size and complexity of the developer or deployer; the nature and extent of the developer's or deployer's business; the substantial likelihood of injury to the public; the safety of persons or property; and whether the violation was likely caused by human or technical error.
The Act highlights the importance of a school monitoring its use of AI: if a developer or deployer discovers a violation through red-teaming or another method, rectifies it within 45 days, and notifies the AG that it has been rectified and that any harm has been mitigated so that the system is now in compliance, it can use this as a defence to any action. While these financial penalties are relatively low, the reputational damage could be significant.
It is clear that if this Act is signed into law, schools will be under a number of obligations where they use high-risk AI systems to make consequential decisions about individuals, with a short amount of time to implement them. At 9ine, we will be keeping an eye on developments and providing updates.
We offer a number of products and services which can help your school to meet the requirements of the Act if it comes into effect. Regardless, schools should already be looking at AI governance to make sure that their use of AI meets the other legal requirements and obligations they are under, including for privacy and data protection, as well as for safeguarding and digital safety.
Our Vendor Management module removes the pain, and time, from evaluating and vetting third-party vendor contracts, AI policies, privacy notices, information security policies and other compliance documents. It supports schools in strategically overseeing and controlling their relationships with third-party suppliers, to ensure that they adhere to stringent AI regulations (which will include this one if it comes into effect) as well as data protection and system security standards.
Vendor Management provides a thorough, ‘traffic light’ based approach to inform you of vendor privacy, cyber, AI, and safeguarding risks. It will be able to support you in identifying where an AI system would be considered a ‘high-risk AI system’ under the proposed Act, meaning that you would need to meet its requirements. Vendor Management also supports you in demonstrating to parents, staff and regulators how you effectively evaluate and manage the technology you choose to deploy.
9ine’s AI Implementation Toolkit simplifies the process of creating policies, processes and assessment templates with our ready-to-use templates and expert support. It is a comprehensive toolkit of the actions schools need to take to implement AI compliantly, and of how to integrate these with other areas of compliance, such as safeguarding and data privacy. This can support schools in complying with the Act, especially when it comes to their risk management and impact assessment processes.
9ine Academy LMS is an on-demand training and certification platform which enables schools to enrol individual staff members or entire groups in comprehensive training courses, modules and assessments, featuring in-built quizzes for knowledge checks. Our AI Pathway is your school's learning partner for AI ethics and governance. With over 20 differentiated course levels, you can enrol all staff in an introductory course to AI, then, for those staff with greater responsibility, enrol them in intermediate and advanced courses. There are also specialist courses for AI in Safeguarding, Child Protection and Technology. This can be used to equip your staff with the knowledge they need, at the appropriate level, to meet the requirements of the Act, given the short amount of time you will have to do this.
Schools can also subscribe to learning pathways in Privacy, Cyber, Tech Operations and Risk Management. Alternatively, schools can purchase courses on a per person and a per course basis. We are currently offering free trials for up to three members of a school’s leadership team, so contact us if you would like to take advantage of this, or have any questions on Academy LMS.
9ine’s Application Library is a solution that enables all staff to access a central, searchable library of all EdTech in the school. The library contains all the information staff need to know about the AI in use, including privacy, safeguarding and cyber risks. With easy-to-add ‘How to’ and ‘Help’ guides, Application Library becomes a single, central digital resource which can show your school how to use high-risk AI systems appropriately. Through implementing Application Library, your school will identify duplication in EdTech, reduce contract subscription costs and have a workflow for staff to follow when requesting new EdTech.
9ine company overview
9ine equips schools to stay safe, secure and compliant. We give schools access to all the expertise they need to meet their technology, cyber, data privacy, governance, risk & compliance needs - in one simple to use platform. For additional information, please visit www.9ine.com or follow us on LinkedIn @9ine.