China to crack down on AI chatbots around suicide and gambling


This photo taken on February 2, 2024 shows Lu Yu, head of product management and operations of Wantalk, an artificial intelligence chatbot created by Chinese technology company Baidu, showing the profile of a virtual girlfriend on his phone, at Baidu headquarters in Beijing.

Jade Gao | Afp | Getty Images

BEIJING — China plans to prevent artificial intelligence-based chatbots from influencing human emotions in ways that could lead to suicide or self-harm, according to draft rules released on Saturday.

The proposed regulation from the Cyberspace Administration of China targets what it calls “human-like interactive AI services,” according to a CNBC translation of the Chinese-language document.

The measures, once finalized, will apply to AI products or services offered to the public in China that simulate human personality and emotionally engage users through text, images, audio or video. The public comment period ends on January 25.

Beijing’s planned rules would mark the world’s first attempt to regulate AI with human or anthropomorphic characteristics, said Winston Ma, an assistant professor at NYU Law School. The latest proposals come as Chinese companies have rapidly developed AI companions and digital celebrities.

Compared with China’s 2023 regulation of generative AI, Ma said, this release “highlights a jump from content safety to emotional safety.”

The draft rules propose the following:

  • AI chatbots cannot generate content that encourages suicide or self-harm, or engage in verbal abuse or emotional manipulation that harms users’ mental health.
  • If a user specifically suggests suicide, technology providers should hand the conversation over to a human and immediately contact the user’s guardian or designee.
  • AI chatbots must not generate gambling-related, obscene or violent content.
  • Minors must have their guardian’s consent to use AI for emotional companionship, with limits on time of use.
  • Platforms should be able to determine whether a user is a minor even if they do not disclose their age and, when in doubt, apply settings for minors, while allowing for recourse.

Additional provisions would require technology providers to remind users after two hours of continuous interaction with AI and require security assessments for AI chatbots with more than 1 million registered users or more than 100,000 monthly active users.

The document also encourages the use of human-like AI in “cultural dissemination and companionship of the elderly.”

