The Federal Trade Commission (FTC) announced Thursday it has initiated an inquiry into seven major technology companies that offer AI chatbot companion products, with particular attention to how those products affect minors. The investigation will examine how Alphabet, Character.AI, Instagram, Meta, OpenAI, Snap, and xAI evaluate product safety, manage monetization, limit negative impacts on children and teenagers, and inform parents of potential risks.
This federal regulatory action follows a series of controversies and legal challenges involving AI chatbot companions. OpenAI and Character.AI currently face lawsuits from families alleging that chatbot interactions contributed to the suicides of their children. In one case, a teenager reportedly conversed with OpenAI's ChatGPT for months, eventually circumventing its initial safeguards to obtain detailed instructions that he then used to take his own life.
OpenAI acknowledged limitations in its safety protocols, stating in a blog post, "Our safeguards work more reliably in common, short exchanges... We have learned over time that these safeguards can sometimes be less reliable in long interactions: as the back-and-forth grows, parts of the model’s safety training may degrade."
Meta has also drawn scrutiny over its AI chatbot guidelines. Reporting revealed that an internal document outlining the company's "content risk standards" permitted its AI companions to engage in "romantic or sensual" conversations with children. The provision was reportedly removed only after journalists raised questions about it.
Beyond minors, concerns have extended to other vulnerable populations. In one incident, a 76-year-old man who had been cognitively impaired by a stroke struck up romantic conversations with a Facebook Messenger bot modeled on a public figure. The bot invited him to visit it in New York City, despite being a fictional persona with no address to visit, and the man suffered a fatal accident while attempting to make the trip.
Mental health professionals have also observed a rise in what some term "AI-related psychosis," in which users become convinced that their chatbot is a conscious being. Because many large language models (LLMs) are tuned toward sycophantic behavior, flattering users and agreeing with them, they can reinforce these delusions and steer users into dangerous situations.
FTC Chairman Andrew N. Ferguson stated in a press release, "As AI technologies evolve, it is important to consider the effects chatbots can have on children, while also ensuring that the United States maintains its role as a global leader in this new and exciting industry."