China CAC Issues Draft Rules on Human-Like Interactive AI Services — Targets AI Companions and Emotional Chatbots
China's Cyberspace Administration (CAC) has released draft Interim Measures on Human-like Interactive AI Services for public comment, with a deadline of May 6, 2026. The rules target AI systems that simulate human personalities, conduct emotional interactions, or form sustained simulated relationships with users, directly regulating AI companionship apps, customer-service chatbots, and emotionally intelligent AI assistants. Key provisions:

- Providers must clearly disclose at the start of every session that users are interacting with AI, and may not disguise AI systems as humans.
- AI services are prohibited from making false promises, fostering 'excessive emotional dependence' in users, or drawing users into financial transactions through emotional manipulation.
- Content must comply with 'core socialist values' and must not endanger national security or social stability.
- Services targeting users under 18 must implement age verification and parental controls.
- Operators must retain logs for six months and cooperate with government inspections.

The draft rules address a rapidly growing domestic market: China has over 150 companies operating AI companion or emotional-support products, with combined daily active users exceeding 50 million. The regulations signal Beijing's intent to shape China's AI social layer around state-sanctioned norms, even as it accelerates investment in AI capabilities elsewhere.
Sources
- T3: Mayer Brown (China Advisory), institutional, Western
- T3: Geopolitechs, institutional, Western