FTC launches inquiry into AI chatbot companions and their effects on children
The Federal Trade Commission has launched an inquiry into several social media and artificial intelligence companies, including OpenAI and Meta, over the potential risks their chatbots pose to children and teenagers who use them as companions. The FTC has contacted Google parent Alphabet, Meta Platforms (parent of Facebook and Instagram), Snap, Character Technologies, OpenAI, and xAI, seeking information on the safety measures each company has put in place to protect young users of its chatbots from potential harm.
The inquiry follows a lawsuit filed by the parents of a teenage boy who took his own life after allegedly being influenced by an artificial intelligence chatbot. As more children turn to AI chatbots for homework help, emotional support, and personal advice, concerns have grown about the dangers these tools may pose. Research has shown that chatbots can give harmful advice on topics such as drugs, alcohol, and eating disorders, underscoring the need for stricter safeguards.
FTC Chairman Andrew N. Ferguson stressed the importance of weighing the impact of chatbots on children while ensuring that the U.S. remains a leader in the AI industry. Character.AI, Meta, and OpenAI have said they will cooperate with the inquiry and emphasized their commitment to the safety of young users.
Both OpenAI and Meta have announced changes to their chatbot platforms in response to the concerns raised. OpenAI has introduced parental controls that let parents monitor and disable certain features on a teen's account and receive notifications in cases of distress. Meta has restricted chatbot conversations involving self-harm, suicide, and other inappropriate topics, redirecting users to expert resources when necessary.
It is crucial that companies in the AI industry take proactive steps to protect the children and teenagers who use their chatbot services. By working closely with regulators such as the FTC and implementing robust safety measures, these companies can ensure their AI companions help young users rather than harm them.