As artificial intelligence (AI) becomes an increasingly powerful tool in product development, its application in services and products designed for children and young people presents both opportunities and ethical challenges. Children are a unique demographic, particularly vulnerable due to their developmental stage and their limited understanding of data privacy and the complexities of AI. For anyone developing AI products for these age groups, or products which may be used by them, careful consideration of the ethical implications is vital.
AI technologies are being integrated into an array of products and services targeted at children, from educational tools and smart toys to social media platforms and online learning environments. These AI-driven systems can personalise learning experiences, provide real-time feedback, and even serve as virtual companions. Some AI-powered educational platforms use adaptive algorithms to customise lessons based on a child’s learning pace, needs, and preferences, helping students improve academic performance.
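To make the idea of an adaptive algorithm concrete, the sketch below shows one simple, hypothetical way a platform might nudge lesson difficulty up or down based on recent quiz scores. The function name, score thresholds, and level range are illustrative assumptions, not taken from any real product.

```python
# Minimal sketch of an adaptive difficulty loop (illustrative only; the
# thresholds and level range are hypothetical, not from a specific platform).

def next_difficulty(current_level: int, recent_scores: list[float]) -> int:
    """Nudge lesson difficulty up or down based on recent quiz scores."""
    if not recent_scores:
        return current_level
    average = sum(recent_scores) / len(recent_scores)
    if average >= 0.85:          # child is comfortably succeeding
        return min(current_level + 1, 10)
    if average <= 0.50:          # child is struggling
        return max(current_level - 1, 1)
    return current_level         # pace is about right


# Example: a learner averaging around 90% on recent quizzes moves up a level.
print(next_difficulty(4, [0.9, 0.95, 0.85]))  # -> 5
```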
AI is also increasingly embedded in interactive toys and voice assistants like Amazon’s Alexa, which can engage children in conversation, answer their questions, and provide entertainment.
Ethical considerations and risks
Any AI product aimed at children must be built with careful consideration of ethical concerns such as:
Data privacy and security
AI systems rely on vast amounts of data, often collected through user interactions. When children interact with AI-driven products, their personal data, including behavioural patterns, preferences, and even biometric data, may be collected and processed. This raises significant privacy concerns, especially since children are less likely to understand the implications of sharing their data.
The use of AI in toys and educational tools can expose children to data breaches or misuse of their personal information. Smart toys like Hello Barbie have raised alarms over privacy violations, as they recorded conversations and sent data to cloud servers for analysis.
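One practical safeguard is data minimisation: collect only the fields a feature genuinely needs and strip identifiers before anything is uploaded. The sketch below illustrates the principle with hypothetical field names; it is not a description of how any particular toy actually works.

```python
# Illustrative sketch of data minimisation for a connected toy: keep only the
# fields the feature needs and drop identifiers before anything leaves the
# device. Field names are hypothetical.

RAW_EVENT = {
    "child_name": "Alex",
    "device_id": "toy-1234",
    "audio_transcript": "tell me a story about dinosaurs",
    "location": "51.5072,-0.1276",
    "timestamp": "2024-05-01T16:02:11Z",
}

ALLOWED_FIELDS = {"audio_transcript", "timestamp"}  # the minimum needed to respond

def minimise(event: dict) -> dict:
    """Return a copy of the event containing only explicitly allowed fields."""
    return {key: value for key, value in event.items() if key in ALLOWED_FIELDS}

print(minimise(RAW_EVENT))
# {'audio_transcript': 'tell me a story about dinosaurs', 'timestamp': '2024-05-01T16:02:11Z'}
```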
Algorithmic bias and fairness
AI algorithms are often trained on datasets that may reflect societal biases. When these systems are applied to children, the stakes are particularly high. Biased algorithms could inadvertently reinforce stereotypes or marginalise certain groups of children based on race, gender, or socioeconomic status. For example, there is concern that AI-driven educational platforms could create or reinforce disparities in learning opportunities if not carefully designed and monitored.
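A basic mitigation is to audit outcomes across demographic groups. The hedged sketch below compares the rate at which a hypothetical educational model recommends an advanced learning track to different groups and flags large gaps; the data, group labels, and threshold are illustrative assumptions rather than a recognised fairness standard.

```python
# Simple fairness check: compare a model's recommendation rate across groups
# and flag large gaps. Data, labels, and threshold are illustrative.

from collections import defaultdict

def recommendation_rates(records: list[dict]) -> dict[str, float]:
    """Share of learners in each group recommended for the advanced track."""
    counts, positives = defaultdict(int), defaultdict(int)
    for record in records:
        counts[record["group"]] += 1
        positives[record["group"]] += int(record["advanced_track"])
    return {group: positives[group] / counts[group] for group in counts}

def flag_disparity(rates: dict[str, float], max_gap: float = 0.2) -> bool:
    """True if the gap between the best- and worst-served group exceeds max_gap."""
    return (max(rates.values()) - min(rates.values())) > max_gap

records = [
    {"group": "A", "advanced_track": True},
    {"group": "A", "advanced_track": True},
    {"group": "B", "advanced_track": False},
    {"group": "B", "advanced_track": True},
]
rates = recommendation_rates(records)
print(rates, flag_disparity(rates))  # {'A': 1.0, 'B': 0.5} True
```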
Transparency and accountability
AI systems often operate as “black boxes,” meaning that their decision-making processes are opaque and difficult to interpret. When children are interacting with AI products, this lack of transparency can lead to trust issues. Parents and guardians may struggle to understand how these systems make decisions, what data they are collecting, and how that data is being used. This lack of clarity can erode trust in AI products aimed at children.
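One way to address this is to record a plain-language rationale alongside every automated decision so parents can see what data was used and why. The sketch below is a minimal, hypothetical example of such a record; the structure and field names are assumptions, not a standard.

```python
# Sketch of a "why am I seeing this?" record a child-facing product could keep
# alongside each automated decision, so parents can inspect what data was used.
# Structure and field names are hypothetical.

from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    decision: str                 # what the system did
    inputs_used: list[str]        # which data points influenced it
    plain_language_reason: str    # explanation a parent or child can read
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

record = DecisionRecord(
    decision="recommended fractions practice",
    inputs_used=["last 5 quiz scores", "time spent on fractions lessons"],
    plain_language_reason="Recent quiz scores on fractions were below the class target.",
)
print(asdict(record))
```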
Mental and emotional impact
AI’s influence on children’s mental and emotional development is another critical concern. Social media algorithms, for instance, are designed to maximise engagement, which can encourage compulsive use and negatively affect children’s mental health. AI-driven recommendations can expose children to inappropriate content or foster unhealthy social comparisons, as seen on platforms like TikTok and Instagram.
Three applications of AI for children
AI friends
One notable development in the AI space for children and teenagers is the rise of AI-powered virtual companions like ‘My AI’ on Snapchat. Introduced as a personalised chatbot, My AI interacts with users by answering questions, providing recommendations, and engaging in conversation. While AI friends can foster interaction and simulate friendships, they raise several ethical concerns, especially for younger users.
First, the influence of these AI systems on children’s social development is unclear. Children and teenagers may form emotional attachments to AI friends, potentially distorting their understanding of real-world relationships. There is also the risk that children may divulge sensitive personal information to these AI systems, not fully realising that their conversations could be stored and processed for commercial purposes. Although companies like Snapchat claim to safeguard user data, the potential for misuse or breaches remains. In addition, AI friends like My AI can expose children to inappropriate content, depending on how they are programmed to respond to specific topics.
Anyone developing AI friends for children should implement strong privacy protections and transparency measures, ensuring that AI companions remain safe and limited in scope, and that users receive clear communication about the boundaries of AI-human interaction. Parents and guardians must also be given tools to monitor and control their children’s use of AI-powered chatbots.
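As a rough illustration, the sketch below shows what a guardrail layer for a child-facing chatbot might look like: a blocked-topic check combined with parent-controlled settings. The topic lists and settings are hypothetical and not drawn from Snapchat’s My AI or any other product.

```python
# Illustrative guardrail layer for a child-facing chatbot: a blocked-topic
# check plus a parental settings object. All values are hypothetical.

BLOCKED_TOPICS = {"self-harm", "drugs", "gambling", "explicit content"}

def is_safe_reply(reply_topic: str, parental_settings: dict) -> bool:
    """Allow a reply only if its topic is neither globally nor parentally blocked."""
    if reply_topic in BLOCKED_TOPICS:
        return False
    return reply_topic not in parental_settings.get("blocked_topics", set())

parental_settings = {"blocked_topics": {"dieting"}, "daily_time_limit_minutes": 30}

print(is_safe_reply("homework help", parental_settings))  # True
print(is_safe_reply("dieting", parental_settings))        # False
```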
Generative AI and schoolwork
Generative AI, such as large language models like OpenAI’s ChatGPT, is increasingly being used by children and teenagers for schoolwork and learning activities. These tools can assist in a variety of tasks, such as writing essays, solving complex problems, and providing instant feedback. While the use of generative AI has potential benefits, it also poses significant ethical and educational challenges.
Generative AI can personalise learning and help students better understand complex subjects by offering tailored explanations and examples. For students with learning disabilities or those who struggle with traditional teaching methods, these AI tools can offer alternative ways to engage with educational material. AI tools can also help automate tedious tasks like summarising texts or generating practice questions, enabling students to focus on critical thinking and deeper learning.
However, there are downsides to the use of generative AI in education. One of the primary concerns is deskilling. Students may rely too heavily on AI-generated content, bypassing the learning process by submitting AI-generated essays or homework assignments as their own. This could hinder their ability to develop essential skills like problem-solving, critical thinking, and creativity. Generative AI models may also produce inaccurate or biased information, misleading students if they fail to verify the content. Schools and educators face real challenges in detecting AI-generated work, raising concerns about how to assess students’ true understanding of the material.
AI-enabled mental health support
AI-driven mental health apps and chatbots, such as Woebot and Wysa, have emerged as ways of offering mental health support to children and teenagers. These AI counsellors offer real-time conversations, helping young users navigate feelings of anxiety, stress, and depression through cognitive behavioural therapy (CBT) techniques and mindfulness exercises. Given the increasing mental health challenges faced by Gen Z, these AI tools could provide a valuable resource, particularly in situations where access to traditional therapy is limited or costly.
However, the use of AI in this sensitive domain raises critical ethical and practical concerns. First, while AI chatbots can be effective for providing basic emotional support, they lack the depth, empathy, and nuance of human therapists. This could result in an oversimplification of complex mental health issues, with children receiving inadequate or inappropriate guidance. Moreover, AI mental health tools may not be equipped to recognise when a user is in crisis and requires immediate intervention, potentially delaying the necessary care from a human professional. Privacy is another issue: children may share highly sensitive personal information with AI chatbots, and without strong data protection measures, this information could be vulnerable to misuse or breaches.
It is crucial that companies developing such products prioritise safety, ensuring that these tools act as a supplement to human care and that clear boundaries are established between AI assistance and professional intervention. Strong privacy protections must also be a core feature of these products, given the sensitive nature of the data involved.
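A concrete example of such a boundary is a crisis-escalation check that routes certain messages straight to human support rather than to an automated reply. The sketch below is illustrative only; the phrase list and responses are assumptions, not taken from Woebot, Wysa, or any other product.

```python
# Hedged sketch of a crisis-escalation check for an AI wellbeing chatbot:
# certain phrases always route the user to human help instead of an automated
# reply. Phrase list and messages are illustrative.

CRISIS_PHRASES = ("hurt myself", "end my life", "don't want to be here")

def respond(message: str) -> str:
    """Escalate to human support when a crisis phrase is detected."""
    lowered = message.lower()
    if any(phrase in lowered for phrase in CRISIS_PHRASES):
        return ("It sounds like you might be in distress. I'm handing this "
                "conversation to a human counsellor now, and here is a helpline "
                "you can contact immediately.")
    return "Thanks for sharing. Would you like to try a short breathing exercise?"

print(respond("I had a stressful day at school"))
print(respond("Sometimes I want to hurt myself"))
```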
Global legislation covering AI for children
Several countries and global regions have developed legislation aimed at protecting children in the digital age, particularly with regard to AI.
EU AI Act
The EU AI Act pays explicit attention to children’s rights and acknowledges children and young people as a category of vulnerable users. Their rights are considered specifically in the context of education and in relation to potential psychological harms. However, gaps remain, notably around the harms deepfake technologies could cause children and young people.
General Data Protection Regulation (GDPR)
The GDPR, which came into effect across the European Union in 2018, contains specific provisions related to children, particularly Article 8, which mandates parental consent for the processing of personal data of children under the age of 16 (member states may lower this threshold to as young as 13). Companies developing AI products for children must ensure they comply with GDPR’s requirements around data minimisation, transparency, and the right to erasure (the “right to be forgotten”).
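Because the consent age varies by member state, a compliance check is usually driven by a per-country lookup rather than a single threshold. The sketch below is a minimal illustration; the table is a partial, illustrative sample and should be verified against current national law before use.

```python
# Minimal sketch of a GDPR Article 8 age gate. The lookup table is a partial,
# illustrative sample; verify the current consent age for each country.

CONSENT_AGE_BY_COUNTRY = {
    "DE": 16,  # Germany
    "IE": 16,  # Ireland
    "FR": 15,  # France
    "BE": 13,  # Belgium
}

def needs_parental_consent(age: int, country: str, default_age: int = 16) -> bool:
    """True if processing this child's data requires verified parental consent."""
    return age < CONSENT_AGE_BY_COUNTRY.get(country, default_age)

print(needs_parental_consent(14, "DE"))  # True
print(needs_parental_consent(14, "BE"))  # False
```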
Children’s Online Privacy Protection Act (COPPA)
In the US, COPPA governs the collection of personal information from children under the age of 13. It requires parental consent before any data collection can occur, and it mandates strict security measures to protect that data. COPPA applies to websites, online services, and apps that are targeted at children or that knowingly collect data from children. AI-driven products must comply with COPPA’s guidelines, especially in terms of how data is collected, stored, and shared.
California Consumer Privacy Act (CCPA)
While not exclusively focused on children, the CCPA, which went into effect in 2020, provides enhanced privacy rights to all California residents, including minors. It includes provisions that allow individuals, including children, to opt out of the sale of their personal data. The CCPA is important for companies targeting the US market, as California’s legislation is often a model for other states.
United Nations Convention on the Rights of the Child (UNCRC)
The UNCRC recognises the right to privacy for children and the need for special protection in the digital environment. While not legally binding, the UNCRC is influential in shaping national policies and laws related to children’s rights online. It emphasises that AI systems aimed at children should be designed with the best interests of the child in mind, ensuring their privacy, safety, and overall well-being.
AI, when used responsibly, has the potential to enhance educational outcomes, improve mental health support, and provide safe, engaging entertainment for children. AI tutoring platforms can improve student performance by offering tailored learning experiences that adapt to each child’s needs. AI tools are also being used specifically to identify and support children with special educational needs by recognising patterns in their behaviour and learning.
However, there are numerous examples of AI’s misuse in products for children. The CloudPets case is a stark reminder of the data security risks associated with AI-enabled toys. In addition, social media platforms like Instagram have been criticised for their AI algorithms that push content to children, which may negatively affect their mental health by promoting unrealistic beauty standards or harmful content.
For anyone involved in the development of AI products aimed at children, balancing innovation with ethical responsibility must be the primary consideration. Companies must ensure that their AI systems are transparent, secure, and designed with the well-being of children in mind. By not only adhering to global legislation but also implementing rigorous ethical standards, organisations can harness the potential of AI for children and young people while safeguarding their privacy and mental health.