Markets News Report

Chinese Authorities Unveil Guidelines for Generative AI Platforms

In Brief

China’s Cyberspace Administration has proposed new regulations for generative AI. The rules are designed to curb discriminatory content and misinformation and to protect both individual privacy and intellectual property rights.

Firms offering generative AI solutions in China are required to pass security evaluations conducted by regulatory bodies prior to making their services available to the public.

This move by the CAC aligns with a global trend where governments are beginning to implement regulations for the burgeoning field of generative AI technology.

China’s internet watchdog has released a proposed set of guidelines targeting generative artificial intelligence tools such as ChatGPT. Under these guidelines, companies providing such services within China are obliged to prevent the dissemination of harmful content, including discriminatory remarks and false information, and to protect personal privacy and intellectual property rights.

The proposal released by the Cyberspace Administration of China (CAC) specifies that generative AI providers must undergo security evaluations before their products can be made publicly available.

The establishment of these standards by the CAC is a reflection of rising international apprehensions regarding the potential hazards posed by generative AI. This technology has gained significant traction in recent times, particularly with the introduction of OpenAI’s ChatGPT. Consequently, many governments are deliberating on regulatory measures to manage its development and application. The unveiling of new AI models has further sparked the need for regulatory adjustments by the CAC.

Chinese tech giants such as Baidu, SenseTime, and Alibaba have recently unveiled generative AI models of their own. Regulatory authorities have emphasized that businesses involved in generative AI must ensure their offerings align with China’s core socialist values. They also indicated that such services should not generate content that incites rebellion, violence, or obscenity, nor disrupt social or economic stability. Firms must guarantee that the data used to train their AI systems is obtained legitimately and take steps to eliminate bias in their algorithms.

The CAC mandates that generative AI providers collect verified identities and relevant data from users; failure to comply can lead to penalties, service suspensions, or criminal investigations. Companies are also obliged to update their technologies within three months to ensure that inappropriate content does not reappear.

