Children and AI: What you need to know
Use of generative AI tools is rising among children, partly because familiar apps like Google Search, WhatsApp, and Snapchat now include AI features. With 26% of 8–12-year-olds and 36% of 12–15-year-olds using AI chatbots*, parents need to be cautious about overuse, dependence, and the associated risks.
Check privacy policies:
Understand how chatbots use, store, or share information generated during interactions.
Can you trust AI content?
AI makes it easy to publish convincing but false narratives online, including on social media. Chatbots and AI-powered search results can be misleading, drawing on biased, outdated, or incomplete information, and sometimes “hallucinating” answers where data is lacking. As AI increasingly trains on content created by other AI, false or distorted information could be amplified. Fact-checking remains essential.
Can ChatGPT do their homework?
Relying on AI for answers can undermine problem-solving, critical thinking, and creativity. It also increases the risk of plagiarism. The National Council for Curriculum and Assessment (NCCA) allows AI tools like ChatGPT for research, but AI-generated content must be clearly referenced or it may be treated as plagiarism, a serious offence.
Should AI be their friend?
Nearly 1 in 10 8–15-year-olds use chatbots for advice or companionship*. Children sharing thoughts with AI should know that their inputs can be used to learn about them and may be shared, depending on the app’s privacy policy. Frequent use of chatbots as companions may limit real-world social skills: unlike peers, AI chatbots are highly agreeable, offering validation without challenge.
AI chatbots are not regulated mental health tools. Without safeguards, they may amplify negative thoughts or give harmful advice. Documented cases include pro-anorexia role-play bots on Character.AI and ChatGPT generating content such as suicide notes.
What risks do “nudify” and AI identity misuse pose?
AI nudification features are widely accessible, via “nudify” apps or through chatbots that allow uploaded images to be “undressed” and manipulated. Children’s images can be altered or shared to humiliate them. AI-generated content can be used in sextortion scams, and voice cloning tools can create convincing audio from just 15 seconds of online recordings. Sharing images of children, even before they have their own accounts (“sharenting”), increases the risk of misuse. 90% of AI-generated CSAM** is now indistinguishable from real content***, and production of the most extreme “Category A” material is rising. Deepfake videos and fine-tuned AI models can generate nearly any image or video of a child. Generating or sharing non-consensual intimate images, including AI-generated ones, is illegal.
TIPS TO REDUCE RISK
- Fact-check AI outputs: Encourage children to verify important information with reliable sources, especially for schoolwork, since AI is not always accurate.
- Use AI to support learning, not replace it: Chatbots can explain ideas or suggest sources, but children should not rely on them to complete assignments. AI use must always be acknowledged to avoid plagiarism.
- Avoid oversharing images and videos: Public content can be manipulated by AI for bullying, harassment, or scams. Discuss serious consequences of sharing or altering images without consent, even “as a joke”.
- Be cautious with AI advice and companionship: Explain that AI chatbots are not people and can give inaccurate or harmful guidance. Children should seek emotional support from trusted adults.
- Set boundaries and check age limits: Use parental controls, limit AI use, and check age ratings and terms. Some AI tools, like Character.AI, are 18+, and companion chatbots may appear in apps children already use, such as Roblox.
For more advice, visit our Essential Digital Parenting guide.
________________________________________________________________________
*A Life Behind The Screens, Trends and Usage Report, CyberSafeKids (Academic Year 2024-2025)
**Child Sexual Abuse Material
***Artificial Intelligence (AI) and the Production of Child Sexual Abuse Imagery, Internet Watch Foundation, 2024
Posted on:
Feb 12, 2026