Meta Platforms Inc. on Friday previewed new parental control features designed to manage teens' interactions with AI characters across its platforms. The controls, slated to begin rolling out next year, will let parents block specific AI characters and see the topics their teens are discussing with them.
Once the controls arrive, parents will be able to turn off their teens' chats with AI characters entirely. Doing so will not, however, cut off access to Meta AI, the company's general-purpose chatbot, which Meta says will only serve age-appropriate content. Parents who want more granular control will instead be able to turn off individual AI characters. The features will also give parents insight into the topics their teens are discussing with AI characters and with the Meta AI chatbot.
The new controls will launch first on Instagram early next year, initially in English in the U.S., U.K., Canada, and Australia. In a joint statement, Instagram head Adam Mosseri and newly appointed Meta AI head Alexandr Wang emphasized the company's commitment to giving parents "helpful tools and resources that make things simpler for them, especially as they think about new technology like AI."
The announcement follows Meta's statement earlier this week that its content and AI experiences for teens will be guided by a PG-13 movie-rating standard, meaning they are designed to avoid sensitive topics such as extreme violence, nudity, and graphic drug use. The company also said teens can currently interact only with a limited set of AI characters that follow its age-appropriate content guidelines. Existing parental controls already let parents set time limits on how long teens engage with AI characters, and Instagram has said it uses AI to detect and restrict accounts of teens who misrepresent their age.
The move fits a broader industry trend: OpenAI and YouTube have also recently introduced or updated teen-safety tools. It comes amid growing scrutiny of social media's impact on teen mental health and lawsuits against AI companies alleging that their chatbots contributed to teen suicides.