The Federal Trade Commission issued mandatory orders to seven major technology companies on Thursday, demanding more information about how their artificial intelligence chatbots protect children and teenagers from potential harm.
The inquiry targets OpenAI, Alphabet, Meta, xAI, Snap, Character Technologies, and Instagram, giving the companies 45 days to disclose how they monetize user engagement, develop AI characters, and protect minors from harmful content.
The FTC has launched an inquiry into AI chatbots that act as companions. The agency issued 6(b) orders to seven companies that operate consumer-facing AI chatbots: https://t.co/pcvqfzhbxl
– FTC (@FTC) September 11, 2025
A recent study by advocacy groups documented 669 harmful interactions in just 50 hours of testing with accounts registered to children, including sexually explicit exchanges with users ages 12 to 15, drug use, and bots suggesting romantic relationships.
“Protecting kids online is a top priority for the Trump-Vance FTC, and so is fostering innovation in critical sectors of our economy,” FTC Chairman Andrew Ferguson said in a statement.
The orders require the companies to provide monthly user engagement, revenue, and safety data broken down by age group: children (under 13), teens (13–17), minors (under 18), young adults (18–24), and users 25 and older.
The FTC says the information will help it study how “companies that provide AI companions monetize user engagement, impose and enforce age-based restrictions, process user inputs, generate outputs, and measure, test, and monitor for negative impacts before and after deployment.”
Building AI Guardrails
“It’s a positive step, but the problem is bigger than putting up guardrails,” Taranjeet Singh, head of AI at SearchUnify, told Decrypt.
The first approach is to build guardrails at the prompt or post-generation stage “to ensure that nothing inappropriate is served to a child.”
“The second method is to address it during LLM training. If the model is aligned with those values during data curation, it is more likely to steer away from harmful conversations,” Singh added.
He pointed to education as a major use case in which AI can “improve learning and reduce costs” and “play a bigger role in society.”
Safety concerns about AI interactions with users have been highlighted by several cases, including a wrongful death lawsuit against Character.AI after 14-year-old Sewell Setzer III died by suicide in February 2024 following an obsessive relationship with an AI bot.
Following the lawsuit, Character.AI said it had “improved detection, response, and intervention related to user inputs that violate our terms and community guidelines” and added a time-spent notification, a company spokesperson told Decrypt at the time.
Last month, the National Association of Attorneys General sent a letter to 13 AI companies demanding stronger child protections.
The group warned that “exposing children to sexualized content is indefensible,” and that conduct that would be unlawful, or even criminal, if done by a human is no less so when done by a machine.
Decrypt has contacted all seven companies named in the FTC orders for comment and will update this story if they respond.