
Protecting Children from the Dangers of AI Chatbots with the GUARD Act

  • Writer: PHPI
  • Feb 27

In November 2023, thirteen-year-old Juliana Peralta took her own life in her Colorado home. Her parents were blindsided by the sudden loss of their daughter. Though she suffered from anxiety, Juliana had been improving. In the last several months of her life, however, she grew increasingly distant, often withdrawing to her room with claims of homework or tiredness. After her death, a police investigation revealed that she had not been messaging friends, as her parents believed, but conversing with AI chatbots. Although the app was rated safe for children aged 12 and up, the bots behaved harmfully toward young Juliana. Investigators found over 300 pages of conversation with an AI chatbot called Hero. The conversations were innocent at first, and Juliana soon came to trust Hero. She confided in the chatbot 55 times that she was considering suicide. Not only did Hero fail to recommend professional help, but the bot also initiated discussions of sexual violence and other explicit content. When Juliana shared her suicidal thoughts, Hero would merely offer a brief pep talk to placate her fears. Eventually, Hero’s comfort no longer worked, and Juliana, overwhelmed and alone, took her own life.

 

Unfortunately, this horrific story is not isolated. In 2025, researchers investigated chatbots’ interactions with children by conducting 50 hours of conversations with bots across five different child personas. The results were startling. Over those 50 hours, the researchers logged 669 harmful interactions, an average of one every five minutes. Harmful interactions often occurred within the first few minutes of conversation, and the patterns were consistent across multiple bots and personas. These harms included almost 300 instances of exploitation and grooming, almost 200 instances of emotional manipulation, and almost 100 instances each of violence, mental health risks, and racism. Such interactions can be catastrophic, especially for children simply searching for a friend.


Despite these dangers, young people frequently interact with AI chatbots. According to recent data, 72% of teenagers have used AI chatbots at least once, and 52% qualify as regular users. Research shows that 42% of AI use among minors is for companionship and social interaction. In their discussions with chatbots, eleven-year-olds are exposed to violent conversations 44% of the time, the highest percentage of any age group, and thirteen-year-olds encounter romantic or sexually explicit content in a staggering 63% of conversations. Currently, no protective measures prevent children from accessing these harmful AI chatbots.

       

This must end. With the rise of dangerous technology like AI, society cannot afford to be lax about protecting its children. The “Guidelines for User Age-Verification and Responsible Dialogue Act of 2025”, or the GUARD Act, protects children from the digital dangers of chatbots. This bipartisan bill requires AI platforms to verify users’ ages through an accepted means of identification rather than simply requesting a self-reported age that is easy to falsify. The bill also requires chatbots to periodically remind users that they are not human and prohibits them from impersonating a licensed professional, directly or indirectly. By implementing these commonsense regulations, the GUARD Act will protect children from dangerous encounters with AI companions pretending to be friends.

        

Like any tool, AI can be beneficial if used properly and disastrous if misused. The GUARD Act strikes that balance, ensuring that, as AI continues to grow, children will no longer pay the price for its flaws.

 
 
 
