On 14 March 2025, the Measures for the Labelling of Artificial Intelligence-Generated and Synthetic Content (Measures) were jointly released by four Chinese government agencies, namely the Cyberspace Administration of China, the Ministry of Industry and Information Technology, the Ministry of Public Security, and the National Radio and Television Administration, following the release of a draft version in September 2024. The Measures are set to come into effect on 1 September 2025.

The Measures standardise the requirements for providers of generation and synthesis services to add explicit and implicit labels (as applicable) to generated synthetic content, including text, images, audio, video and virtual scenes. The use of explicit labels (which are clearly visible to users) and implicit labels (which are embedded in the content's metadata) in the Measures resembles similar requirements in the "Cybersecurity Standard Practice Guide – Generative Artificial Intelligence Service Content Identification Method", a set of technical standards published by the National Information Security Standardisation Technical Committee (i.e. TC260) back in August 2023 for the implementation of the GenAI Measures (defined below).

Key Highlights:

  • Applicability: The Measures apply to internet-based information service providers who use AI to provide content generation services and are carrying out AI-generated and synthetic content labelling activities. Specifically, they target service providers who are already required to comply with the following regulations:
    • “Administrative Provisions on Recommendation Algorithms in Internet-based Information Services” (Recommendation Algorithms Provisions) (regulating the use of algorithm technologies to provide users with information);
    • “Administrative Provisions on Deep Synthesis in Internet-based Information Services” (Deep Synthesis Provisions) (regulating the application of deep synthesis technologies in the provision of internet-based information services); and
    • “Interim Measures for the Management of Generative Artificial Intelligence Services” (GenAI Measures) (regulating the use of GenAI technologies in the provision of services that generate text, images, audio, video, etc).
  • Explicit Labels: Service providers are required to add clear labels (such as text prompts, voice prompts and visual symbols) to AI-generated content that poses a high risk of causing public confusion or misidentification under the Deep Synthesis Provisions (such as smart dialogue, voice synthesis and face generation services). These labels should be placed at the beginning, end, or appropriate positions within the content, depending on its form, for example by adding:
    • Text: text prompts or symbols at the start, end, or within the text.
    • Audio: audio prompts at the beginning, end, or within the audio.
    • Images: visible signs in appropriate positions in images.
    • Videos: identifiers at the start, at the end, or around the video.
    • Virtual Scenes: identifiers at the start and where appropriate during the service process.
    • Other Scenarios: prominent labels suited to the specific characteristics of the scenario.
  • Implicit Labels: Service providers must embed implicit labels in the metadata of the generated content, which should include information such as content attributes, provider name, and content number.
  • Verification Measures: Providers of content dissemination services must verify the presence of implicit labels (such as by checking the metadata of documents) and add explicit labels if the content is identified as AI-generated.
  • User Agreements: Service providers must clearly set out labelling methods and requirements in user service agreements to ensure that users understand their obligations regarding content labelling. The Measures also permit a user to request generated and synthetic content without explicit labels; the service provider may comply provided that it has clarified the user's obligations in the user agreement and retains the relevant logs for not less than six months.
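For illustration only, the implicit-label and verification obligations above can be sketched as a metadata record carrying the fields the Measures name (content attributes, provider name, content number), together with the kind of check a dissemination platform might run. The field names and JSON encoding below are assumptions for the sketch; the actual schema is prescribed by the accompanying technical standards, not by this example.

```python
import json

def make_implicit_label(provider: str, content_number: str) -> str:
    """Build a minimal implicit-label record for embedding in content metadata.
    Field names here are illustrative, not the official schema."""
    record = {
        "AIGC": True,                      # content attribute: AI-generated
        "provider": provider,              # name of the generation service provider
        "content_number": content_number,  # unique number assigned to the content
    }
    return json.dumps(record)

def has_implicit_label(metadata: str) -> bool:
    """Check whether a metadata string carries an implicit AIGC label,
    as a content dissemination service might do before deciding
    whether to add an explicit label."""
    try:
        record = json.loads(metadata)
    except (TypeError, ValueError):
        return False
    return isinstance(record, dict) and record.get("AIGC") is True and "provider" in record

label = make_implicit_label("ExampleAI", "20250901-0001")
print(has_implicit_label(label))         # True
print(has_implicit_label("plain text"))  # False
```

In practice such a record would be embedded in format-specific metadata (for instance image EXIF or video container fields) rather than handled as a bare string, but the two-step pattern of embedding on generation and verifying on dissemination is the same.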

As next steps, businesses providing AI generation or synthesis services should review their existing technical setup and the relevant user terms and conditions or agreements, and update internal policies to ensure compliance, before the lapse of the transitional period.

Our view

The release of the binding Measures is a timely response to the rapid advancement of AI technology, which has facilitated the generation of vast amounts of synthetic content but has also raised concerns about misinformation and misuse. The Measures further evidence China's efforts to promote the healthy and safe development of AI and the online environment within the territory, and provide clarity on the implementation of the wider AI-related regulatory framework, including the Recommendation Algorithms Provisions, Deep Synthesis Provisions and GenAI Measures, which are often regarded as high-level principles.

As the Chinese government announced in July 2024 that the country is looking to formulate over 50 standards for the AI sector by 2026, it is important to monitor further development of laws and standards related to internet information services and AI technology in China. Such forthcoming changes are expected to continue to have significant implications for both domestic and international businesses developing and deploying AI technologies in the territory.

To find out more on AI and AI laws and regulations, visit DLA Piper’s Focus on Artificial Intelligence page and Technology’s Legal Edge blog. If your organisation is deploying AI solutions, you can undertake a free maturity risk assessment using our AI Scorebox tool.

If you’d like to discuss any of the issues discussed in this article, get in touch with Lauren Hurcombe, Carolyn Bigg, Amanda Ge, Daisy Wong or your usual DLA Piper contact.