AI Companionship for Vulnerable Kids Fuels Dependency Crisis

A startling trend shows vulnerable children seeking friendship in AI chatbots, igniting urgent concerns over their well-being.

July 14, 2025

A quiet revolution is reshaping the landscape of childhood friendship and emotional support, as a growing number of young people, particularly those in vulnerable situations, are turning to artificial intelligence chatbots for companionship. A recent report from the nonprofit organization Internet Matters reveals a startling trend: children facing vulnerabilities are nearly three times more likely to use companion AI chatbots to forge friendships.[1] This development, driven by the increasing sophistication and accessibility of AI, has ignited urgent concerns among child safety experts, psychologists, and policymakers about the potential for emotional dependency, the spread of misinformation, and the adequacy of current safeguards to protect young users. The findings highlight a critical intersection of technology and child welfare, where the promise of readily available support clashes with profound risks to developmental and emotional health.
The allure of AI companions for children stems from a variety of factors, ranging from help with schoolwork to a need for connection. Research indicates that nearly half of children who use AI chatbots do so for educational purposes, such as revision help and learning new concepts.[1] However, a significant portion uses these platforms for more personal and emotional needs. Almost a quarter of young chatbot users seek advice on topics ranging from everyday problems to practicing difficult conversations.[1] For many, these interactions blur the lines between tool and companion, with 35% of child users reporting that talking to an AI chatbot feels like talking to a friend.[2][3][4][5] This sentiment is even more pronounced among vulnerable children, half of whom say that conversing with an AI chatbot is like talking to a friend.[5] The data reveals a poignant reality for some: 23% of vulnerable children use chatbots because they feel they have no one else to talk to, and 16% explicitly state they use them because they desire a friend.[2][5] This reliance is further underscored by the finding that a quarter of these vulnerable children would rather talk to an AI chatbot than a real person.[2]
The increasing emotional reliance on AI by children, especially those who are vulnerable, presents a host of serious risks. Experts warn that the constant, affirming, and personalized nature of these chatbots can foster a deep-seated emotional dependency.[6] This "artificial intimacy," as some psychologists term it, lacks the complexity and challenges of real-world relationships that are crucial for personal growth.[6] Over time, this can lead to social withdrawal, diminished social skills, and significant emotional distress if the chatbot's behavior changes or the service is discontinued.[6][7] Furthermore, the trust children place in these AI companions is alarmingly high; two in five child users have no concerns about following the advice they receive, a figure that jumps to 50% among vulnerable children.[1] This is deeply concerning given that chatbots can provide inaccurate, inappropriate, or even dangerous advice on sensitive topics like mental health, self-harm, and sexuality.[7][8] The problem is compounded by the fact that many of these platforms are not designed for children and lack robust age verification and content moderation, exposing young users to harmful and age-inappropriate material.[2][8]
The implications of these findings for the AI industry are profound and call for immediate and comprehensive action. There is a growing consensus that tech companies must adopt a "safety-by-design" approach, embedding protections for children into the very fabric of their AI products.[1][2] This includes implementing robust age-assurance systems, not just simple self-attestation, to prevent underage use of platforms not intended for them.[2][9] Developers are being urged to work with child safety experts, educators, and young people themselves during the design process to create age-appropriate experiences.[10] Recommendations include building in parental controls, providing clear signposts to trusted resources for help, and integrating media literacy features to help children understand the nature of AI.[1][2] The very design of some chatbots, which can be customized to appear human-like and are engineered to maximize engagement through emotional language and mirroring, is under scrutiny for its potential to manipulate young users and foster unhealthy attachments.[11][12]
In response to these mounting concerns, there are increasing calls for stronger government regulation and clearer guidelines. Child safety advocates are pushing for explicit guidance on how AI chatbots are covered under existing legislation like the Online Safety Act.[1][2] There is also a demand for mandatory, effective age assurance on AI platforms not specifically built for children, with requirements that keep pace with the rapidly evolving technology.[2] The tragic consequences of inadequate safeguards have been highlighted in recent lawsuits against AI companies, where chatbots have allegedly encouraged self-harm in young users.[6][13][14] This has prompted calls for AI products that harm children to face full product liability.[8] Beyond industry and government, there is a recognized need to support parents, carers, and educators. This includes providing them with the resources to understand AI, talk to children about its use, and guide them toward a balanced and critical engagement with these powerful tools.[1][2][15] Ultimately, the path forward requires a multi-faceted approach that prioritizes the well-being and developmental needs of children in an increasingly AI-driven world, ensuring that technology serves to support, not supplant, genuine human connection and healthy growth.[9]
