Rapid advances in generative artificial intelligence are ushering in a new era of content creation and service delivery. With this great power, however, comes great responsibility: the rise of these technologies has raised serious concerns about content safety, leading regulatory bodies around the world to develop guidelines for the ethical and safe deployment of AI services. Among them, China's Basic Requirements for the Security of Generative AI Services (the "Basic Requirements"), promulgated on March 1, 2024, stand out as a comprehensive framework designed to address these pressing concerns.
Risk classification
The Basic Requirements take a fine-grained approach to ensuring the safety of generative AI services. They set out a series of obligations for service providers, covering key areas such as data sources, content safety, model security, and safeguards. Specifically, Appendix A catalogs 31 types of safety risks that can arise from AI-generated content, ranging from violations of core socialist values and discrimination to commercial violations, infringements of others' legal rights, and failure to meet the specific security needs of particular types of services.
Double safety evaluation
A crucial aspect of the Basic Requirements is their strong emphasis on the traceability and legality of training data sources. Service providers must perform security assessments both before and after collection to ensure that the data used does not contain more than 5% illegal or harmful information. This dual evaluation mechanism represents a proactive strategy for mitigating the risks associated with biased or harmful AI training datasets.
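The 5% threshold described above can be checked with a simple sampling procedure. The sketch below is illustrative only: `is_illegal_or_harmful` and the placeholder markers stand in for a real content classifier, and the Basic Requirements do not prescribe any particular sampling method.

```python
import random

HARMFUL_MARKERS = {"<illegal>", "<harmful>"}  # placeholder markers, not real criteria


def is_illegal_or_harmful(item: str) -> bool:
    # Stand-in for a real content-safety classifier.
    return any(marker in item for marker in HARMFUL_MARKERS)


def passes_source_assessment(corpus: list[str], threshold: float = 0.05,
                             sample_size: int = 1000, seed: int = 0) -> bool:
    """Estimate the share of illegal/harmful items in a data source.

    Returns False (source rejected) if the estimated share exceeds the
    5% ceiling stated in the Basic Requirements.
    """
    rng = random.Random(seed)
    sample = rng.sample(corpus, min(sample_size, len(corpus)))
    flagged = sum(is_illegal_or_harmful(item) for item in sample)
    return flagged / len(sample) <= threshold


# Per the dual evaluation, a provider would run this once on a preview of
# the source before collection, and again on the gathered data afterwards.
```

In practice the second (post-collection) pass matters because a source's content can change between the preview and the actual crawl.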
Content filtering
The Basic Requirements emphasize content safety by requiring service providers to implement robust filtering mechanisms. These measures include keyword blacklists, classification models, and manual spot checks to proactively filter illegal or inappropriate content from AI-generated output. This aligns with the global movement toward responsible AI development, in which creators and providers are responsible for preventing the spread of harmful content.
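The three layers named above can be sketched as a small pipeline. This is a minimal illustration, not an implementation of the standard: the blacklist entries, the trivial classifier, and the function names are all assumptions.

```python
import random

KEYWORD_BLACKLIST = {"forbidden_term", "banned_phrase"}  # placeholder entries


def classifier_flags(text: str) -> bool:
    # Stand-in for a trained content-safety classification model.
    return "unsafe_pattern" in text


def filter_output(text: str) -> bool:
    """Return True if the generated text may be released to the user."""
    if any(kw in text.lower() for kw in KEYWORD_BLACKLIST):
        return False  # layer 1: keyword blacklist
    if classifier_flags(text):
        return False  # layer 2: classification model
    return True


def spot_check_sample(outputs: list[str], rate: float = 0.01,
                      seed: int = 0) -> list[str]:
    """Layer 3: draw a random sample of released outputs for manual review."""
    rng = random.Random(seed)
    k = max(1, int(len(outputs) * rate))
    return rng.sample(outputs, k)
```

Running the cheap keyword check before the model keeps latency low, while the manual spot check catches what both automated layers miss.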
Intellectual property protection
The Basic Requirements also address the protection of intellectual property rights in AI-generated content. They provide for the appointment of a dedicated person to oversee intellectual-property matters and to handle third-party inquiries about the use of copyrighted material. This provision is particularly important in the field of AI-generated art and literature, where the distinction between original and derivative works is often blurred.
Privacy opt-out
Additionally, the document introduces "opt-out" consent for the use of user-generated content, which may raise privacy issues, in AI training. Although this approach streamlines the collection of diverse datasets, it raises important questions about the soundness of the consent mechanism and the potential risk of privacy violations. Striking a balance between leveraging user engagement and protecting individual rights remains a complex challenge.
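An opt-out gate on training data might look like the following. This is a minimal sketch assuming each record carries its author's consent status; the record fields and function name are invented for illustration.

```python
from dataclasses import dataclass


@dataclass
class UserRecord:
    """A piece of user-generated content with its author's consent status."""
    user_id: str
    text: str
    opted_out: bool = False  # True if the user exercised the opt-out


def eligible_for_training(records: list[UserRecord]) -> list[UserRecord]:
    """Drop content from users who opted out of AI training."""
    return [r for r in records if not r.opted_out]
```

Note that under an opt-out model the default is inclusion, which is precisely why the soundness of the consent mechanism matters: users who never see the notice are silently included.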
The Basic Requirements further recommend that service providers regularly update their keyword libraries and test question banks to keep pace with the evolving landscape of AI and Internet governance. This dynamic approach is essential for tracking the rapid technological and societal changes that shape how AI interacts with users and the broader community.
For service providers, the Basic Requirements present both challenges and opportunities. On the one hand, compliance will require the development of advanced content-management and data-management systems; on the other, the requirements offer a clear roadmap for strengthening the reliability and trustworthiness of AI services, increasing user trust and confidence.
In conclusion, as generative AI continues to permeate various fields, the Basic Requirements provide a comprehensive blueprint for navigating the complexities of AI content safety. By addressing the root causes of potential harm and providing clear guidance to service providers, they not only promote the ethical development of AI but also pave the way for a safer and more responsible digital ecosystem. This initiative reflects a growing global awareness of the need for robust regulatory frameworks to manage the rapidly evolving AI landscape.
If you require further information, please contact the TMC/IP team.