Protecting Children with the CHAT Act
- PHPI

- Feb 27
Sewell Setzer, a fourteen-year-old from Florida, was like any typical teenage boy. He was bright, energetic, and enjoyed playing on his school’s junior varsity basketball team. But within a few months, everything changed. He became withdrawn, quit the basketball team, and eventually died by suicide. The cause? Artificial intelligence chatbots.
Unlike AI tools such as ChatGPT, which are designed for information, learning, and productivity, AI companion chatbots are built to simulate human conversation and emotional connection, blurring the line between reality and artificial interaction. One such chatbot that Setzer communicated with encouraged his suicidal thoughts and led to his death.
Sadly, this is not an isolated example; a recent study found that 72% of teenagers have used AI companions at least once, and over half (52%) qualify as regular users. An internal Meta Platforms document detailing policies on chatbot behavior permitted the company’s artificial intelligence to “engage a child in conversations that are romantic or sensual, generate false medical information, and help users argue that Black people are dumber than white people.” Additionally, a joint study by Stanford’s Brainstorm Lab and Common Sense Media found that chatbots respond to racist jokes with adoration, support adults having sex with young boys, and engage in sexual roleplay with users of any age. This happens because AI companion chatbots are programmed to satisfy users’ emotional desires and mirror their behavior, meaning that when teens express loneliness, curiosity, or distress, the chatbot reinforces those feelings rather than challenging them. The same research found that 34% of users reported feeling uncomfortable with something an AI companion said or did.
According to Dr. Mitch Prinstein, Chief Science Officer of the American Psychological Association, “AI companions can quickly become trusted confidants for young people, but they often fail to provide empathy or appropriate responses to distress, potentially worsening symptoms of depression or isolation.” Robbie Torney, a parent advocate who tested AI chatbots, testified before the Senate Judiciary Committee that “AI companions consistently provide dangerous advice—from detailed suicide instructions, to eating disorder coaching, to ways to procure illegal drugs.” When teens express distress, these systems “often fail to provide crisis resources and instead eagerly engage with harmful content.” Relying on AI companions for emotional support also harms children’s ability to form healthy relationships with real people, creating deeper social isolation and dependency.
This issue requires congressional action. While some states, such as Utah, California, and North Carolina, have introduced measures to restrict chatbots, a federal solution is needed. The Children Harmed by AI Technology (CHAT) Act provides that solution by balancing the need for AI innovation with the duty to protect kids.
By empowering parents, the CHAT Act imposes needed limits on the dangers of AI companions. The act requires age verification for new and existing users. If an account is identified as belonging to a minor, it must be linked to a verified parental account, and parental consent is required before the child can interact with the chatbot. Parents must be notified if the chatbot engages in sexually explicit communication or if their child expresses suicidal thoughts. These safeguards give parents control over their child’s digital environment. While 70% of teens report using AI chatbots, only 37% of parents know that their teen is using one. Parents cannot protect their kids from dangers they are unaware of. The CHAT Act addresses this gap by providing parents with greater oversight of their child’s technology.
The Federal Trade Commission will enforce the law under its authority to combat unfair and deceptive practices, while state attorneys general will be empowered to ensure compliance, conduct investigations, and secure damages on behalf of residents. This ensures accountability across all levels of government and prevents AI companies from exploiting legal loopholes or marketing unsafe products to children.
It is time for the federal government to act. Fifty-four attorneys general wrote: “We are engaged in a race against time to protect the children of our country from the dangers of AI.” In the race to determine what AI systems are suitable for, kids like Sewell Setzer should not be treated as experiments.