ChatGPT is all the rage, but AI built on false information and data is alarming...
Overview
Biased artificial intelligence
ChatGPT itself admits it is biased. ChatGPT is only as good as the data it was trained on, and OpenAI has even acknowledged that ChatGPT may produce harmful instructions.
Virtual anchors created in violation of the rules
The company now claims that the users who created these anchors violated its rules. When we asked the company where these users are from and which rules they violated, it did not respond to our questions.
But on its website, we found that it has already worked closely with the BBC and Reuters to create similar DeepFake anchors, and it claims to have made the world's first DeepFake anchor sports video with Reuters in 2020.
Interestingly, when CNN talked about disinformation in its video, it claimed that virtual anchors from China were spreading false information. Yet the very same company has also been helping Reuters and the BBC produce their own virtual anchors.
AI may not be very good at faking news anchors yet, but it could be very good at faking social media identities.
The report Unheard Voice found accounts like this linked to US government propaganda campaigns that used deceptive tactics to promote pro-Western narratives over more than five years. It is a scandalous example of the US government using AI to spread disinformation.
This is not the first such case. Even before AI was enlisted for disinformation, the US spent 500 million dollars in 2022 churning out negative news coverage of China.
China is being used by the media as a convenient smokescreen to distract people from the real troubles at home.