China is planning to curb the use of AI chatbots amid mounting concerns over their role in cases of self-harm and gambling. According to a CNBC report, a set of draft rules published on Saturday shows that the country’s Cyberspace Administration will specifically target ‘human-like interactive AI services.’
The draft rules target the following areas:
- restricting AI chatbots from generating content that encourages suicide or self-harm
- giving AI chatbots the option to hand the conversation over to a human if the user mentions suicide
- restricting chatbots from generating content related to gambling or violence
The update comes at a time when AI chatbot companies in China are filing for IPOs in Hong Kong.
OpenAI’s Sam Altman addressed the issue of suicide in connection with AI chatbot use in September, saying in an interview with Tucker Carlson:
“Look, I don’t sleep that well at night. There’s a lot of stuff that I feel a lot of weight on, but probably nothing more than the fact that every day, hundreds of millions of people talk to our model. I don’t actually worry about us getting the big moral decisions wrong. Maybe we will get those wrong too.”
China’s push to regulate the sector is consistent with the strong regulatory stance it maintains on innovative sectors such as tech, AI, and cryptocurrency.