
AI in education: The impact of AI on privacy, data protection and ethics in education

Written by 9ine | Oct 31, 2024

Given the large amounts of personal data AI often uses, and the impact its outputs can have on people and society, it is important for schools to consider how the use of AI in education will affect privacy and other ethical issues.

This article is part of a series of insights from 9ine exploring the impact of Artificial Intelligence (AI) in education. From the opportunities it can bring to the risks and challenges it creates when not implemented appropriately, this series will bring you up to speed and provide practical tips for remaining compliant and ethical when using AI in education.

In our previous article, we looked at the approach regulators are taking to AI, with global leaders signalling the need for some degree of control over its use in society, particularly in the education sector. In this article, we take a deeper look at one of the reasons for this perceived need for control: the impact AI has on ethics, including the privacy of students and teachers.

What is ethics?

Ethics is a system of moral principles that guides how people and organisations should behave in order to promote the common good and improve society. For organisations, ethics can help build trust and a good reputation; for individuals, it can help them make decisions that are good for themselves, others, and society as a whole.

Despite the various opportunities that AI brings, including for ethics, it has the potential to seriously affect the decisions that individuals and organisations make concerning various ethical principles, and in some cases it actually makes those decisions for them.

What are the potential impacts of AI on ethics? 

AI has the potential to impact ethical considerations in various areas of society, but education has been specifically called out as a sector where AI raises new types of ethical issues in addition to exacerbating existing biases and discrimination.

Privacy, Data Protection and Security 

We have already discussed the impact of AI on cybersecurity, but AI is also having a significant impact on the data protection and privacy of individuals. Privacy is a fundamental human right: the right of individuals to be ‘let alone’, or to have freedom from interference or intrusion by others. Data protection encompasses the tools and policies used to protect privacy. Beyond physical privacy, advances in technology (which facilitate the collection of data about individuals) have led to the need for ‘informational privacy’: the ability of individuals to have some level of control over who has access to information about them (personal data) and for what purpose.

In addition to recognising privacy as a fundamental human right, many countries have adopted privacy and data protection laws to protect it. Most notable is the EU General Data Protection Regulation (GDPR), under which the most serious infringements can attract fines of up to €20,000,000 or 4% of global annual turnover, whichever is higher. The GDPR is based on a number of data protection principles (lawfulness, fairness and transparency; purpose limitation; data minimisation; accuracy; storage limitation; integrity and confidentiality; and accountability), with requirements for controllers and processors of personal data. Its approach has been replicated by various other countries (a phenomenon known as the ‘Brussels effect’), meaning these principles are a common theme in much of the privacy and data protection legislation globally. The increasing use of AI challenges them all, from the vast amounts of data AI systems process to the difficulty of understanding how that data is used.
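
To put the scale of those fines in concrete terms, the cap for the most serious infringements is whichever of the two figures is higher. A minimal sketch of that arithmetic (all turnover figures are illustrative):

```python
# GDPR Article 83(5): the fine cap for the most serious infringements is the
# higher of EUR 20 million or 4% of global annual turnover. The turnover
# figures below are purely illustrative.
def gdpr_max_fine(annual_turnover_eur: float) -> float:
    """Return the maximum possible fine for a given global annual turnover."""
    return max(20_000_000, 0.04 * annual_turnover_eur)

print(gdpr_max_fine(50_000_000))     # EUR 50m turnover  -> cap is EUR 20m
print(gdpr_max_fine(1_000_000_000))  # EUR 1bn turnover  -> cap is EUR 40m
```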

Lawfulness, Fairness and Transparency Principle

For schools, this principle means: 

  • That they must be clear, open and honest about how they are processing personal data (transparency); 
  • That individuals will expect the school to be processing their personal data in the way they do, and that it will not lead to unjustified adverse effects (fairness); and 
  • That the school has a reason (which is often prescribed in law) that allows them to process the personal data, e.g. they have the individual’s consent (lawfulness).

The purpose of this principle is to make sure that individuals are aware of how their data is being processed so that they can make informed decisions about who can process it, and for what. 

AI challenges this principle because some AI systems are complex and difficult to interpret (referred to as ‘black box’ systems), meaning that their internal workings are invisible. While a school may be able to explain the data that it inputs into an AI system, and the output, it may be unable to explain how the data has been processed and how the AI system arrived at any decisions or predictions. The school is then unable to be transparent and unable to justify a lawful basis for processing. There can also be a lack of transparency about what data has been used to train an AI system that a school may be using via an EdTech vendor, as personal data is often gathered from a number of sources (such as data scraping, ingestion of publicly accessible information, and data brokers). A lack of transparency can also mean that schools are unable to understand the limitations of an AI system when they are using it to make decisions.

Purpose Limitation Principle

This principle means that personal data can only be collected for specified, legitimate reasons and cannot be used for any other purpose. It supports the principles of transparency and fairness and aims to ensure that people are clear about why their data is being collected and that data is used in line with their expectations. 

AI introduces difficulties here because the lifecycle of an AI system involves several stages, each of which can involve processing different types of data for different purposes. There is also an almost limitless range of use cases for AI, and the cost and difficulty of collating training data for AI models can be high. For these reasons, developers are drawn to reusing the same or enriched training datasets, which can lead to ‘purpose creep’, where data is used to train new models for a purpose that is not compatible with the original purpose for which it was collected. Use cases for applications of an AI model may also vary over time, meaning that purposes are stretched and become broader. And the developer of an application may be someone other than the developer of the underlying AI model, creating the possibility of a disconnect between the purposes originally communicated to individuals when their data was collected and the purposes for which it is now being used.

These issues can lead to developers attempting to rely on broad purposes which then make it difficult to explain the specific processing activities that purpose covers, reducing the ability of individuals to understand how their data is being used and their control over it. 

Data Minimisation and Storage Limitation Principles

These principles mean that organisations should only collect and store personal data that is necessary to achieve a specific purpose, and should only keep it for as long as it is necessary for that purpose. Given that schools are a prime target for cyber attacks, in many ways the less personal data they store the better.

AI systems use large amounts of data, which may include special category/sensitive data, to learn and improve their capabilities, and arguably more data leads to better learning and improvement. This creates a conflict between the desire to keep and use as much data as possible and the principle of data minimisation. In addition, there is tension over how long the data used to train a model should be kept once the model is in operation: laws or regulatory guidance may require it to be retained to evidence how the model was trained, which can sit uneasily with storage limitation.
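
In practice, minimisation can start with something as simple as filtering a record down to the fields a stated purpose actually requires before it is shared with an AI tool or vendor. A minimal sketch in Python (the field names and the placement purpose are hypothetical):

```python
# Hypothetical example: share only the fields needed for a stated purpose
# (here, subject placement) before passing a student record to an AI vendor.
ALLOWED_FIELDS = {"year_group", "subject", "assessment_score"}

def minimise(record: dict) -> dict:
    """Drop every field not needed for the placement purpose."""
    return {k: v for k, v in record.items() if k in ALLOWED_FIELDS}

student = {
    "name": "A. Student",       # identifying - not needed for placement
    "home_address": "...",      # sensitive - not needed for placement
    "year_group": 9,
    "subject": "maths",
    "assessment_score": 72,
}

print(minimise(student))
# {'year_group': 9, 'subject': 'maths', 'assessment_score': 72}
```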

Accuracy Principle

This principle requires personal data to be kept accurate and up to date, so that both its use and any decisions based on it are accurate. It requires schools to proactively ensure the accuracy of the personal data they process, and to respect any legitimate requests from individuals to correct and update inaccurate personal data.

The increased use of AI challenges the principle of accuracy because it is not always easy for individuals to have their personal data corrected, and in particular to have inaccurate outputs generated by AI services corrected.

It is clear that AI challenges various aspects of privacy, and the principles that are used to protect it. 

Diversity and Fairness  

Diversity is pivotal to AI, which brings opportunities for promoting diversity in sensitive social, economic and political domains. However, its use is not without risk.

There is a risk to diversity where demographic groups are underrepresented in the training data for AI models, as AI systems are not inherently designed to consider all demographic groups equally. Existing stereotypes and biases can then be replicated in the decisions and predictions that AI systems make, resulting in harms such as discrimination or unequal treatment. For example, where AI is used to place students in appropriate subject levels, if the model does not account for a student’s nuanced experiences (such as belonging to a minority or coming from a low-income family), it could lead to unfairly lowered expectations of them based on historic bias. Alongside unrepresentative data, the developers of AI can also perpetuate bias: AI is influenced by the values of the programmers who design it, meaning that any biases they hold can be amplified in the real world via the AI system.
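
To see how underrepresentation alone can skew outcomes, consider a toy simulation: a placement rule is fitted to pooled data in which one group makes up 95% of the examples, and the single threshold it learns tracks the majority group. A minimal sketch (synthetic data, NumPy only; every number is illustrative):

```python
# Toy illustration of how under-representation in training data can produce
# unequal error rates between demographic groups. All data is synthetic.
import numpy as np

rng = np.random.default_rng(0)

# Group A dominates the training data (95%); group B is under-represented (5%).
# The 'true' pass threshold differs between the groups (55 vs 45).
scores_a = rng.normal(60, 10, 950)
labels_a = (scores_a > 55).astype(int)
scores_b = rng.normal(50, 10, 50)
labels_b = (scores_b > 45).astype(int)

scores = np.concatenate([scores_a, scores_b])
labels = np.concatenate([labels_a, labels_b])

# 'Train' the simplest possible model: pick the one cut-off that maximises
# accuracy over the pooled, imbalanced data. It ends up tracking group A.
candidates = np.linspace(scores.min(), scores.max(), 200)
best_cut = max(candidates, key=lambda c: ((scores > c) == labels).mean())

acc_a = ((scores_a > best_cut) == labels_a).mean()
acc_b = ((scores_b > best_cut) == labels_b).mean()
print(f"cut-off={best_cut:.1f}, accuracy A={acc_a:.0%}, accuracy B={acc_b:.0%}")
# Group B students are systematically mis-placed far more often than group A.
```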

AI can also affect diversity in the cultural and creative industries. It can create much enrichment, but when it is in the hands of only a few actors it can also lead to increased concentration in the supply of cultural content, data, markets and income. This has potential negative implications for the diversity of languages, media, cultural expressions, participation and equality, all of which are key to the range of content that schools want to make available and accessible to students as part of their education.

Equity

Whilst AI is widely available, it is not universally so, and many of the most advanced AI systems require a fee. For schools, this means considering which tools they can make available to students, particularly those from disadvantaged backgrounds who may not have equal access to AI tools. Educators will need to think about the disadvantage this may create for these students in the classroom and how to handle it. Left unchecked, AI’s impact on equity could widen the gap between privileged and marginalised students.

Human Agency 

Human agency is the ability of people to make their own choices and act independently, especially in decision-making. One of the key benefits of AI is that it has the ability to perform tasks which previously only humans could do, giving technology a new role in human practices and society. Human agency is important when it comes to AI because it helps ensure that AI serves people, instead of undermining their autonomy and other ethical choices, but experts are split over how much control people will retain over essential decision-making as digital systems and AI spread. 

In some cases humans prefer AI to make decisions for them; in others they are not aware of the extent to which their decisions are being influenced by AI, whether through a lack of transparency about the fact (and the way) that AI is being used, or through a lack of engagement with the technology caused by AI illiteracy. Either way, these factors, together with an overreliance on AI by teachers and students, raise serious ethical considerations for human agency. UNESCO has highlighted the importance of AI technologies being used to empower teachers and students and enhance their educational experience, not to take agency away from them. Overreliance on AI in schools, by teachers for the benefits it brings and by students in producing their work, could lead to a loss of critical thinking skills for both, undermining the education system as a whole.

Sustainability 

Another area where AI creates ethical concerns is sustainability: the ability to meet the needs of the present without compromising the needs of future generations. As in many areas, AI offers opportunities for sustainability, including more sustainable energy systems. A key concern, however, is that AI is far from ecologically sustainable: its use generates a considerable amount of CO2 emissions (from its reliance on data centres and from the production and operation of all the hardware required), a major contributor to the climate crisis. Given that students in schools are the future generations sustainability is trying to protect, it is important that schools consider sustainability as part of their decision-making around AI.

UNESCO Global Standard on AI Ethics 

In response to these ethical concerns, UNESCO produced the first-ever global standard on AI ethics, with the protection of human rights and dignity as the cornerstone of its recommendations. The aim of the standard is to help navigate these ethical concerns through four values and ten principles. The standard acknowledges that there are ethical considerations for AI systems at all stages of the AI lifecycle, and a range of people involved in the ethical decision-making at each stage (known as ‘AI actors’). The standard is based on four values:

  • Respect, protection and promotion of human rights and fundamental freedoms and human dignity 
  • Environment and ecosystem flourishing 
  • Ensuring diversity and inclusiveness 
  • Living in peaceful and interconnected societies    

To realise these values, the standard has the following principles for AI:

  • Proportionality and ‘Do No Harm’: The use of AI should be proportionate to its legitimate aim and not result in harm to human rights or fundamental freedoms
  • Safety and Security: Unwanted harms and vulnerability to attack should be avoided 
  • Fairness and Non-discrimination: AI actors should promote social justice and safeguard fairness and non-discrimination and ensure the benefits of AI are available and accessible to all 
  • Sustainability: The development of sustainable societies
  • Right to Privacy and Data Protection: The right to privacy, which is essential to human autonomy and human agency, must be respected, protected and promoted throughout the lifecycle of AI systems, with a privacy-by-design approach taken 
  • Human Oversight and Determination: It must always be possible to attribute ethical and legal responsibility for any stage of the lifecycle of AI systems to physical persons or existing legal entities, with the decision to rely on an AI system always ultimately remaining a human one
  • Transparency and Explainability: Explanatory information on the workings of an AI system must be provided, alongside intelligible insight into its outputs, to enable the results produced by AI systems to be challenged 
  • Responsibility and Accountability: The ethical responsibility and liability for decisions should always ultimately be attributable to AI actors corresponding to their role in the AI lifecycle 
  • Awareness and Literacy: Awareness and understanding of AI technologies and the value of data should be promoted through education, so that all members of society can take informed decisions about the use of AI systems and can be protected from undue influence 
  • Multi-stakeholder and adaptive governance and collaboration: The participation of different stakeholders throughout the AI system lifecycle in AI governance 

The standard encourages the promotion and protection of these ethical values in order to avoid AI’s potential negative impacts, and it specifically calls out the need for research initiatives on the responsible and ethical use of AI technologies in teaching, teacher training and e-learning, as well as on the impact of AI technologies on teachers and students.

So what should schools do? 

The ethical issues AI presents are serious: they can threaten the education system as a whole, as well as schools’ ability to meet their legal and moral responsibility to safeguard students against harm. Educators are responsible for setting guidelines, policies and ethical frameworks for AI in schools, to help students and teachers navigate these ethical concerns in the best way possible for them as individuals, for the school, and for the education system as a whole.

By making clear what constitutes appropriate use of AI within their school, educators can demonstrate that they are prepared to embrace AI in ways that enhance the student experience and students’ development: allowing students to use AI responsibly and as a driver of innovation rather than as a shortcut, and ensuring that AI doesn’t exacerbate existing issues related to inequity. To do this, schools need to:

  • Undertake ethical impact assessments: to identify and assess the benefits, concerns and risks of AI systems, as well as appropriate risk prevention, mitigation and monitoring, including any impact on educators’ and students’ human rights, particularly where they are vulnerable 
  • Ensure ethical governance and stewardship of AI within schools: these mechanisms should be inclusive, transparent, multi-disciplinary and multi-stakeholder, and should ensure that any harms caused through AI systems are investigated and redressed. The school should also consider the role of an independent ethical AI Officer 
  • Create and enforce data policies and governance: Schools should ensure the continual evaluation and quality of training data, including the adequacy of the data collection and selection processes. Schools should also fulfil all of their privacy and data protection compliance obligations, including the completion of privacy impact and vendor assessments
  • Provide AI training and awareness: Schools need to educate staff and students in AI literacy, empowering them to understand the technology and navigate the ethical issues it creates

At 9ine we offer a number of products and services that can help schools with the challenges AI presents. Specific solutions for navigating the concerns around AI and ethics in education include:

  • 9ine Academy LMS: Our AI Pathway is your school's learning partner for AI ethics and governance. With differentiated course levels, you can enrol staff in an introductory course to AI and then, for those staff with greater responsibility, enrol them in intermediate and advanced courses. There are also specialist courses for AI in safeguarding, child protection and technology.
  • Privacy Academy: A virtual, six-month programme of monthly training and professional development to manage privacy law, AI, cybersecurity and safeguarding risks of harm at your school. Following a project-based methodology, we train and coach you on implementing a Privacy Management Programme that considers AI, cyber and safeguarding risks of harm.
  • Tech Academy: Training and professional development to enhance school IT teams' skills in security and operations.
  • Vendor Management: Removes the pain, and the time, from evaluating and vetting third-party vendor contracts, privacy notices, information security policies and other compliance documents. Vendor Management provides a thorough, ‘traffic light’-based approach to inform you of vendor privacy, cyber, AI and safeguarding risks, and supports you in demonstrating to parents, staff and regulators how you effectively evaluate and manage the technology you choose to deploy.

In our next article, we will take a close look at the leadership and oversight schools require when it comes to AI, including the question of whether an AI Officer is required. 

9ine company overview

9ine equips schools to stay safe, secure and compliant. We give schools access to all the expertise they need to meet their technology, cyber, data privacy, governance, risk & compliance needs - in one simple-to-use platform. For additional information, please visit www.9ine.com or follow us on LinkedIn @9ine.