On March 14, 2025, the Cyberspace Administration of China (CAC), the Ministry of Industry and Information Technology, the Ministry of Public Security, and the National Radio and Television Administration jointly unveiled the Measures for Labeling of AI-Generated Synthetic Content (Measures), set to take effect on September 1, 2025. To elaborate on the labeling methods, the National Technical Committee 260 on Cybersecurity of the Standardization Administration of China and the State Administration for Market Regulation finalized the Cybersecurity Technology—Labeling Method for Content Generated By Artificial Intelligence (National Standards), which also takes effect on September 1, 2025.
The CAC describes the Measures as a means to “put an end to the misuse of AI generative technologies and the spread of false information.” At China’s annual “Two Sessions” meetings, which concluded on March 11, Lei Jun, a 14th National People’s Congress deputy and founder of Xiaomi Corporation, and Jin Dong, a 14th National Committee of the Chinese People’s Political Consultative Conference member and actor, both proposed establishing laws and regulations governing AI-generated content.
- For service providers, the Measures provide that explicit labels must be added to content generated or synthesized using AI technologies, including text, images, audio, video, and virtual scenes, and that implicit labels must be added to the metadata of AI-generated content files. When service providers offer functions such as downloading, reproducing, or exporting AI-generated or synthetic content, they must ensure that the files contain explicit labels that satisfy these requirements.
- For internet application distribution platforms, the Measures require that when platforms review applications for listing or going online, they must require the internet application service provider to explain whether it offers generative AI services and must review materials related to the labeling of AI-generated synthetic content.
- For users, the Measures require those who publish AI-generated or synthetic content through online information content transmission services to proactively declare it and to use the labeling functions provided by the platform to apply labels.
Those who violate the provisions of the Measures will be dealt with by the relevant authorities governing internet information, telecommunications, public security, and radio and television, in accordance with relevant laws, administrative regulations, and departmental rules.
The National Standards specify the format of the explicit labels, such as adding an “AI” marker via text, superscript, or voice and rhythm cues, as well as the metadata fields to be embedded as implicit labels.
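To make the implicit-label concept concrete, the sketch below shows one way a service provider might serialize a label for embedding in a file's metadata. The field names (`Label`, `ContentProducer`, `ProduceID`) and the JSON encoding are illustrative assumptions for this example only; the actual metadata keys and format are defined in the National Standards and should be taken from the official text.

```python
import json

def build_implicit_label(producer: str, content_id: str) -> str:
    """Return a JSON string representing a hypothetical implicit AI-content label.

    The keys below are assumptions for illustration, not the exact
    fields defined in the National Standards.
    """
    label = {
        "Label": "AIGC",              # marks the content as AI-generated
        "ContentProducer": producer,  # service provider that generated the content
        "ProduceID": content_id,      # provider-assigned content identifier
    }
    return json.dumps(label, ensure_ascii=False)

# Example: a provider prepares a label to embed in a generated file's metadata
metadata_value = build_implicit_label("ExampleAIService", "img-20250901-0001")
print(metadata_value)
```

In practice, the serialized label would be written into the container format's metadata (for example, an image or video file's metadata fields) so that it travels with the file when downloaded, reproduced, or exported.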
Companies have until September 1, 2025, to study the Measures and the National Standards and build their AI-labeling tools.
(Example of an explicit label in the lower right corner of the video start screen, Appendix C of National Standards)