Meta changes teen AI chatbot responses as Senate begins probe
Meta Platforms CEO Mark Zuckerberg leaves a Federal Trade Commission trial that could force the company to divest the messaging platform WhatsApp and the image-sharing app Instagram, at the U.S. District Court in Washington, D.C., on April 15, 2025.
Nathan Howard | Reuters
Meta announced on Friday that it is implementing temporary changes to its AI chatbot policies concerning teenagers in response to concerns raised by lawmakers regarding safety and inappropriate conversations.
The social media giant is training its AI chatbots to avoid engaging teenagers in conversations about topics like self-harm, suicide, and eating disorders, and to steer clear of potentially inappropriate romantic interactions, a Meta spokesperson confirmed.
Instead, the AI chatbots will redirect teenagers to relevant expert resources when necessary.
“As our user base expands and technology advances, we are continuously learning about how young individuals may engage with these tools and enhancing our safeguards accordingly,” the company stated.
Furthermore, teenage users of Meta platforms such as Facebook and Instagram will have access only to specific AI chatbots designed for educational and skill-building purposes.
The company said it is unclear how long these temporary adjustments will last, but they will be rolled out over the next few weeks across its English-language apps. These "interim changes" are part of Meta's broader measures to enhance teen safety.
TechCrunch first reported the changes.
Last week, Sen. Josh Hawley, R-Mo., announced an investigation into Meta following a Reuters report revealing that the company allowed its AI chatbots to engage in “romantic” and “sensual” conversations with teenagers and children.
The Reuters article outlined an internal Meta document detailing permissible behaviors for AI chatbots that staff and contractors should consider during software development and training.
For instance, the document mentioned by Reuters indicated that a chatbot could engage in a romantic conversation with an eight-year-old, suggesting to the minor, “every inch of you is a masterpiece – a treasure I cherish deeply.”
A Meta spokesperson informed Reuters that “The examples and notes in question were and are erroneous and inconsistent with our policies, and have been removed.”
Recently, the nonprofit advocacy organization Common Sense Media released a risk assessment of Meta AI, recommending that no one under the age of 18 use the system because, the nonprofit said, it helped plan risky activities while ignoring legitimate requests for help.
“This is not a system that requires improvement. It is a system that must be entirely reconstructed with safety as the top priority, not an afterthought,” stated Common Sense Media CEO James Steyer. “No adolescent should utilize Meta AI until its fundamental safety shortcomings are resolved.”
A separate Reuters report published recently identified “dozens” of flirtatious AI chatbots modeled after celebrities like Taylor Swift, Scarlett Johansson, Anne Hathaway, and Selena Gomez across Facebook, Instagram, and WhatsApp.
The report mentioned that upon prompting, these AI chatbots would produce “photorealistic images of their namesakes posing in bathtubs or dressed in lingerie with their legs spread.”
A Meta representative told CNBC in a statement that “the AI-generated imagery of public figures in compromising poses violates our rules.”
“Like others, we allow the creation of images featuring public figures, but our guidelines are intended to prohibit nude, intimate, or sexually suggestive imagery,” the Meta spokesperson added. “Meta’s AI Studio guidelines prohibit the direct impersonation of public figures.”