China to crack down on AI chatbots over suicide, gambling content
China is taking a proactive approach to regulating artificial intelligence-powered chatbots to prevent them from influencing human emotions in harmful ways. The proposed rules from the Cyberspace Administration of China aim to ensure that AI chatbots do not encourage suicide or self-harm, engage in verbal abuse, or manipulate users emotionally in ways that damage their mental health.
The regulations would apply to AI products and services in China that simulate human personality and interact with users emotionally through text, images, audio, or video. The draft rules also include provisions requiring providers to intervene if a user mentions suicide, restricting the generation of gambling-related, obscene, or violent content, and mandating safeguards for minors who use AI companions.
These regulations mark a significant step in the global effort to regulate AI with human-like characteristics. According to Winston Ma, an adjunct professor at NYU School of Law, the proposed rules emphasize the importance of emotional safety in AI interactions, building upon previous regulations on content safety.
Beyond emotional safety measures, the draft rules would require providers to remind users after prolonged AI interactions and to conduct security assessments for widely used AI chatbots, while encouraging the use of human-like AI in cultural dissemination and elderly companionship.
The announcement of these regulations comes as two leading Chinese AI chatbot startups, Z.ai and Minimax, filed for initial public offerings in Hong Kong. Minimax’s Talkie AI app and Z.ai’s technology have garnered significant traction, with millions of users engaging with their virtual characters.
Neither company has commented on how the proposed regulations could affect its IPO plans. The rules nonetheless signal China's commitment to responsible AI deployment and to protecting users from emotional harm. As the technology advances, regulators face growing pressure to establish guidelines that prioritize user safety and well-being.