For Chinese Students, the New Tactic Against AI Checks: More AI
In recent months, Chinese universities have begun using advanced AI systems to detect machine-generated content in dissertations. In response, students, including those who did not initially use chatbots, are turning to AI tools to ensure their work passes these stringent checks.
Just three days before her deadline, Wen Suyu raced to revise her nearly 30,000-character dissertation. Despite writing most of the content herself, she feared it might still be flagged as AI-generated text.
After eight long hours of editing, the AI detection rate had dropped only slightly, from 17% to 14%. Desperate, she turned to PaperPass, a service that promised to lower AI detection rates, and paid 360 yuan ($50).
“The AI detection rate fell to 7%, but the text did not resemble human language at all,” Wen, a senior at a university in southern China’s Guangdong province, tells Sixth Tone.
Across the country, many students are grappling with the same dilemma as the rapid adoption of AI software meets increasingly stringent university standards. To uphold academic integrity, Chinese universities are now using advanced systems to identify AI-generated content in student dissertations.
The move stems from an updated draft law introduced last August that proposes revoking the diplomas of students found to have used AI to write their dissertations. Since then, several academic databases have developed AI detection systems in response to growing concerns over academic fraud.
This year alone, at least 15 universities have announced they will evaluate the AI-generated content (AIGC) percentage in students’ dissertations, with some requiring students to disclose any AI usage. The passing threshold varies from less than 20% to 40%, while some schools have not disclosed their standards.
Most use the VIP Paper Check System, a long-standing plagiarism checker widely used in universities, which added an AI ghostwriting detection function last year amid rising concerns about machine-generated content. The system claims to detect text produced by AI models such as ChatGPT, Baidu’s Ernie Bot, and iFlytek’s SparkDesk, and rates the suspected share of AIGC as mild, moderate, or high.
As detection systems become more sophisticated and the pressure to outsmart these technologies intensifies, many students are turning to online communities for help. On the lifestyle app Xiaohongshu alone, over 10,000 posts share tips on reducing “AI detection rates,” with some students reporting detection rates of over 60%.
Services to lower AI rates, often offered by ghostwriting agencies, are also in high demand on e-commerce platforms like Taobao. Sixth Tone found that for a 10,000-character text with a 40% AI detection rate, bringing the rate below 10% typically costs 150 yuan, while getting it under 5% costs 180 yuan. On social media platforms, netizens use the phrase “defeat magic with magic” to share tips on circumventing AI detection.
AI vs. AI
Though Wen’s university has yet to officially implement an AIGC detection system, the widespread discussion and anxiety on social media left her unsettled.
Fearing a potential spot check, she took matters into her own hands, spending 40 yuan to use the VIP Paper Check System and CNKI, another major academic database offering AIGC detection services. But neither offered options to reduce the AI detection score.
The AI detection rate for her dissertation read 17% on the VIP Paper Check System, but was almost 0% on CNKI.
“I think almost everyone uses AI. I use AI to polish my writing. The overall structure, data, and content of the article are all written by me, but sometimes I feel my expression is not ideal, so I use AI to refine it a bit,” admits Wen.
Common AI tools in China include search giant Baidu’s Ernie Bot, Alibaba’s Tongyi Qianwen, Kunlun Tech’s TianGong AI, and Moonshot’s Kimi Chatbot. Wen, for instance, used a combination of Claude — the AI chatbot by Anthropic — Ernie, and Tongyi Qianwen.
“Claude generates more precise language, while the domestic models assist with research and use wording that aligns more closely with Chinese writing,” says Wen.
On the lifestyle app Xiaohongshu, popular posts share methods to disguise writing as non-AI-generated. For instance, AI-generated text often features consistent sentence structures, omits subjects, uses short phrases to connect sentences, and favors colons and numbers to list points.
Social posts about reducing the AI detection rate. From Xiaohongshu
Cao, a senior at a university in the eastern Jiangxi province, received an official notice in May requiring her dissertation to have less than 20% AI-generated content to pass.
While she had no trouble rewriting her own, her roommate’s essay was flagged as 50% AI-generated content on CNKI. With Cao’s help, they managed to reduce the detection rate to almost zero within a fortnight.
The process helped Cao discover useful strategies to reduce the rate. “I think the AI detection algorithm of CNKI starts from the beginning and the end of the sentence, so I reorganize the language to lower the rate. For example, removing words like ‘firstly,’ ‘secondly,’ and ‘on the other hand’ from the beginning of the sentence works,” says Cao, who asked to be identified only by her surname.
But revising the writing style is time-consuming and doesn’t always yield noticeable results, which is why many students turn to yet more AI tools to erase traces of machine-generated content from their papers.
Zong Chengqing, a researcher at the Institute of Automation, Chinese Academy of Sciences, tells Sixth Tone that AI-generated text is based on mathematical rules calculated by large models.
By analyzing the surrounding words, he explains, the model uses a mathematical formula to predict which word should come next given the context, and generates text one step at a time from those probabilities.
“Therefore, we can also use this mathematical rule to determine whether the text complies with a certain model, helping us judge whether it is machine-generated or written by a human. The accuracy should be able to reach over 80%,” Zong says of how such detection works.
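In rough terms, a detector can score how predictable a text is under a language model: machine-written passages tend to be more predictable (lower perplexity) than human writing. The sketch below illustrates the idea with an off-the-shelf open model; the scoring model, threshold, and overall approach are assumptions for illustration only, not the method any Chinese detection system has disclosed.

```python
# Minimal sketch: score text predictability (perplexity) under a language model.
# Lower perplexity suggests the text closely follows the model's statistics,
# which is one signal detectors use. Model name and threshold are placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"  # illustrative scoring model, not a real detector's model
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()

def perplexity(text: str) -> float:
    """Average per-token perplexity of `text` under the scoring model."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        # Passing labels=ids returns the mean next-token cross-entropy loss.
        loss = model(ids, labels=ids).loss
    return float(torch.exp(loss))

def looks_machine_generated(text: str, threshold: float = 20.0) -> bool:
    """Flag text whose perplexity falls below an arbitrary, illustrative threshold."""
    return perplexity(text) < threshold
```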
Ke Chunxiao, deputy general manager of Tongfang Knowledge Network Digital Publishing Technology Co., Ltd., the company behind CNKI, recently told domestic media, “The core of AI-generated content detection relies on large amounts of text and data samples to identify differences between human and AI-generated content, such as average sentence length, vocabulary diversity, and text structure.”
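As a toy illustration of the surface statistics Ke lists, the snippet below computes average sentence length and a simple vocabulary-diversity measure. Real systems such as CNKI’s presumably combine far more signals; this feature set and the crude tokenization are assumptions for illustration.

```python
# Toy surface-statistics extractor: average sentence length and type-token ratio.
import re

def surface_features(text: str) -> dict:
    # Split on common Chinese and English sentence-ending punctuation.
    sentences = [s for s in re.split(r"[。！？.!?]+", text) if s.strip()]
    # Crude tokenization; real Chinese text would need a proper word segmenter.
    tokens = re.findall(r"\w+", text)
    return {
        "avg_sentence_len": len(tokens) / max(len(sentences), 1),
        "type_token_ratio": len(set(tokens)) / max(len(tokens), 1),
    }
```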
However, AI detection outcomes may vary considerably across different systems. On social media, some students say that AI detection across different platforms can differ by more than 50%.
“This might indicate that the development of different large models is uneven,” says Zong. “Large models often need to accurately identify AI-generated content by analyzing vast amounts of text. However, when dealing with natural language, their inference, generalization ability, and imagination are often much worse than humans, which can lead to errors in detection.”
While he believes that current regulations are not unified and need improvement to better regulate AI in academia, he supports introducing AI detection systems to enhance academic integrity. “I think being a little stricter is particularly beneficial for current students.”
According to Zong, as AI is used to create more content, more software will appear to detect and verify it. “AI detection technology will certainly keep developing, but it may be hard to achieve 100% accuracy. It’s like how, as long as there are missiles in the world, there will be anti-missile systems,” he says.