Whilst there are many opportunities to be realised from using AI in education, its use also has the potential to create new risks and exacerbate existing ones, including risks to ethics, child safety and protection, and the education system as a whole.
This article is part of a series of insights from 9ine exploring the impact of Artificial Intelligence (AI) in education. From the opportunities it can bring, to the potential risks and challenges it creates where not implemented appropriately, this series will bring you up to speed and provide practical tips to remain compliant and ethical when using AI in education.
In the previous article, we explored what AI is, what opportunities it creates for education and how it is currently being used. In this article we take a deeper look at the risks and challenges that schools will face when implementing AI.
Although AI can create a number of opportunities for educational institutions, its use in education can also create a number of risks and challenges, from risks to individuals to risks to the integrity of the educational system as a whole. To realise the opportunities of AI in education, schools will need to be mindful of these risks when considering whether it is appropriate to use AI, and how.
From the lack of transparency about how personal data is processed using AI (known as the ‘black box’ problem), to the limited rights that individuals consequently have over their data, the use of AI in education creates and increases a number of risks to the privacy of students and teachers. Because AI relies on vast amounts of data to create effective outputs, data minimisation (a key requirement of data protection and privacy laws globally) is often overlooked. This can result in the unauthorised processing of personal data, where data collected for one purpose is repurposed for another in the context of AI. These issues expose schools to the risk of non-compliance and threaten the privacy of educators and students.
Privacy is not the only area of ethics that using AI in education threatens. The lack of transparency in AI systems can also result in unjustified actions: decisions may be taken based on an AI system’s output even though the rationale behind that output cannot be understood or explained. Decisions made by AI in education can then lead to, or perpetuate, societal discrimination, because an AI system’s design and functionality often reflect the values of its designer and the data chosen as its input. The use of AI can also impact the autonomy of individuals, particularly in the case of personalisation, where information may be filtered before it is presented to the individual, thereby reducing their exposure to, and experience of, the world.
The rapid adoption of AI also introduces complex cybersecurity risks that traditional practices cannot always sufficiently address, and schools, where funding, resources and expertise may already be stretched, are no exception. Increasing use of AI expands a school’s attack surface, multiplying the potential entry points and vulnerabilities that an attacker can use to compromise a system or network, and lowers the barrier for attackers to create and inject malware into a school’s systems. AI can also enable more sophisticated phishing attacks, by profiling individuals and automating impersonation, and can be used to carry out Distributed Denial of Service (DDoS) attacks.
In addition to the risks to individuals, the use of AI also creates risks to the functioning and integrity of the educational system as a whole. The increasing use and availability of AI systems in schools can lead to an overreliance on AI and a loss of critical thinking skills. This overreliance may stem from well-intentioned aims, such as taking advantage of the efficiencies that AI can create, but it can lead to harmful outcomes, such as plagiarism and academic dishonesty where students pass off AI-generated content as their own. Overreliance diminishes students’ ability to truly learn, undermining the education system as a whole.
AI can also create and increase a number of risks to child safety, a key responsibility of schools in protecting children from abuse, neglect and harm. These range from age-inappropriate conversations between a child and an AI, to decreased human interaction and a growing dependency on AI. The Council of International Schools has also raised concerns about the risk of AI being used to create ‘deep fakes’: images, videos, audio files or GIFs manipulated by a computer to use someone’s face, voice or body without their consent. Deep fakes have already been shown to cause distress and harm to teachers and students.
These examples highlight some of the risks that using AI in education can create and exacerbate, all of which are challenges that schools will need to overcome and implement safeguards against. Other challenges include knowing when a task is appropriate to delegate to AI and how to remain compliant with regulations on AI. Over the next ten articles, 9ine will take you through more of these topics and questions, and explain how you can practically put safeguards in place. At 9ine we offer a number of products and services that can help schools with the challenges that AI presents. Specific solutions to support schools with their leadership and oversight of AI include:
In our next article, in celebration of Cybersecurity Awareness Month, we go into further detail about the cybersecurity challenges and risks that AI can create, and how to safeguard against them.
9ine equips schools to stay safe, secure and compliant. We give schools access to all the expertise they need to meet their technology, cyber, data privacy, governance, risk and compliance needs - in one simple-to-use platform. For additional information, please visit www.9ine.com or follow us on LinkedIn @9ine.