Amid AI Content Boom in China, Douyin Vows Greater Scrutiny
All AI-generated videos will need to be clearly labeled, and the platform says content that breaks the law will be “severely punished.”
Douyin, the Chinese version of TikTok, has announced a raft of measures to tackle the rise of AI-generated content online and address concerns over rumors and misinformation.
The platform, owned by tech giant ByteDance, said Tuesday that it now requires content creators to “prominently label” AI-generated content. The company aims to make it easier for users to distinguish between virtual and real content, especially in complex or confusing videos.
The tech giant’s move comes just weeks after China’s top internet watchdog released its first major regulatory response to AI-generated content. The draft rules state that generative AI products must undergo a security review before they are released to the public and that all content must be “true and accurate.”
According to Chinese consultancy LeadLeo, AI-generated material accounted for less than 1% of digital content in 2021. But its projections indicate a significant surge, with AI-generated content predicted to reach 25% by 2026. The market size could also increase from 1 billion yuan ($144 million) in 2021 to 70 billion yuan by 2026.
Douyin will also now require all digital avatars — also known as virtual humans — to be registered, and the users behind them will need to undergo real-name authentication. Users may not use AI tools to generate content that infringes on legal rights or to spread rumors and disinformation. Douyin stated that such violations would be “severely punished.”
In its statement, Douyin said it would focus on “videos, images, text, and other content created by generative AI technologies,” as well as livestreams hosted by digital avatars. Douyin added that those publishing AI-generated content would be held accountable for any consequences.
Yu Yang, an associate professor at the Institute for Interdisciplinary Information Sciences, Tsinghua University, told Sixth Tone that the initiative comes as a quick response to address major concerns, like privacy and copyright violations, surrounding emerging technologies.
Highlighting the role social media platforms play in addressing the issue, he said: “Compared with regulators, they are more sensitive to issues in the ecosystem since they are on the frontline.” Yu added that such measures from platforms could help government regulators frame better rules.
The rapid growth of AI-generated content has triggered widespread debate over its potential to spread rumors and misinformation among the public, and legal experts have warned of risks posed by massive data sets and the absence of fact-checking mechanisms.
A ChatGPT-generated conversation imitating a city government’s announcement stirred up widespread public confusion in February, while a man was arrested in April for allegedly using ChatGPT to generate fake news.
Privacy concerns have also emerged after a spate of cases involving deepfake technology. In March, a fake nude image of a woman sparked public outrage after a netizen downloaded her photo online and altered it with an AI tool.