In an age of doubt, we still need faith | Economist Leaders
Mind map by:
Tracy, an unremarkable little genius and born wage worker
Leaders | AI voted
The English text is taken from the Leaders section of The Economist, September 2, 2023.
How artificial intelligence will affect the elections of 2024
Disinformation will become easier to produce, but it matters less than you might think
Politics is SUPPOSED to be about persuasion; but it has always been stalked by propaganda. Campaigners dissemble, exaggerate and fib. They transmit lies, ranging from bald-faced to white, through whatever means are available. Anti-vaccine conspiracies were once propagated through pamphlets instead of podcasts. A century before covid-19, anti-maskers in the era of Spanish flu waged a disinformation campaign. They sent fake messages from the surgeon-general via telegram (the wires, not the smartphone app). Because people are not angels, elections have never been free from falsehoods and mistaken beliefs.
But as the world contemplates a series of votes in 2024, something new is causing a lot of worry. In the past, disinformation has always been created by humans. Advances in generative artificial intelligence (AI)—with models that can spit out sophisticated essays and create realistic images from text prompts—make synthetic propaganda possible. The fear is that disinformation campaigns may be supercharged in 2024, just as countries with a collective population of some 4bn—including America, Britain, India, Indonesia and Mexico—prepare to vote. How worried should their citizens be?
It is important to be precise about what generative-AI tools like ChatGPT do and do not change. Before they came along, disinformation was already a problem in democracies. The corrosive idea that America’s presidential election in 2020 was rigged brought rioters to the Capitol on January 6th—but it was spread by Donald Trump, Republican elites and conservative mass-media outlets using conventional means. Activists for the BJP in India spread rumours via WhatsApp threads. All of this was done without generative-AI tools.
What could large-language models change in 2024? One thing is the quantity of disinformation: if the volume of nonsense were multiplied by 1,000 or 100,000, it might persuade people to vote differently. A second concerns quality. Hyper-realistic deepfakes could sway voters before false audio, photos and videos could be debunked. A third is microtargeting. With AI, voters may be inundated with highly personalised propaganda at scale. Networks of propaganda bots could be made harder to detect than existing disinformation efforts are. Voters’ trust in their fellow citizens, which in America has been declining for decades, may well suffer as people begin to doubt everything.
Note:
"Deepfake" is a portmanteau of "deep learning" and "fake": the use of deep-learning algorithms to simulate or forge audio and video. The most common form of deepfake is AI face-swapping; other forms include voice simulation, face synthesis and video generation.
This is worrying, but there are reasons to believe AI is not about to wreck humanity’s 2,500-year-old experiment with democracy. Many people think that others are more gullible than they themselves are. In fact, voters are hard to persuade, especially on salient political issues such as whom they want to be president. (Ask yourself what deepfake would change your choice between Joe Biden and Mr Trump.) The multi-billion-dollar campaign industry in America that uses humans to persuade voters can generate only minute changes in their behaviour.
Tools to produce believable fake images and text have existed for decades. Although generative AI might be a labour-saving technology for internet troll farms, it is not clear that effort was the binding constraint in the production of disinformation. New image-generation algorithms are impressive, but without tuning and human judgment they are still prone to produce pictures of people with six fingers on each hand, making the possibility of personalised deepfakes remote for the time being. Even if these AI-augmented tactics were to prove effective, they would soon be adopted by many interested parties: the cumulative effect of these influence operations would be to make social networks even more cacophonous and unusable. It is hard to prove that mistrust translates into a systematic advantage for one party over the other.
Note:
“Troll farms” is often used to refer to online operations that produce large amounts of content, such as fake accounts, comments, and posts, for the purpose of manipulating public opinion or spreading misinformation.
Social-media platforms, where misinformation spreads, and AI firms say they are focused on the risks. OpenAI, the company behind ChatGPT, says it will monitor usage to try to detect political-influence operations. Big-tech platforms, criticised both for propagating disinformation in the 2016 election and taking down too much in 2020, have become better at identifying suspicious accounts (though they have become loth to arbitrate the truthfulness of content generated by real people). Alphabet and Meta ban the use of manipulated media in political advertising and say they are quick to respond to deepfakes. Other companies are trying to craft a technological standard establishing the provenance of real images and videos.
Notes:
1. loath/loth (to do sth): unwilling; reluctant
2. arbitrate (in/on sth): to officially settle an argument or a disagreement between two people or groups
3. In this article, "take down" refers to big-tech platforms removing, delisting or banning certain content or accounts.
4. Political advertising is a form of campaigning that allows candidates to directly convey their message to voters and influence the political debate.
5. deepfakes: videos of a person in which their face or body has been digitally altered so that they appear to be someone else, typically used maliciously or to spread false information
6. craft: to make skillfully
7. establish: to discover or prove the facts of a situation
8. provenance: the provenance of something is the place that it comes from or that it originally came from
Voluntary regulation has limits, however, and the involuntary sort poses risks. Open-source models, like Meta’s Llama, which generates text, and Stable Diffusion, which makes images, can be used without oversight. And not all platforms are created equal—TikTok, the video-sharing social-media company, is designed to promote virality from any source, including new accounts. Twitter (which is now called X) cut its oversight team after it was bought by Elon Musk, and the platform is a haven for bots. The agency regulating elections in America is considering a disclosure requirement for campaigns using synthetically generated images. This is sensible, though malicious actors will not comply with it. Some in America are calling for a system of extreme regulation under which AI algorithms must be registered with a government body. Such heavy-handed control would erode the advantage America has in AI innovation.
Note:
virality n.: the condition or fact of being rapidly spread or popularized by means of people communicating with each other, especially through the internet
Politics was never pure
Technological determinism, which pins all the foibles of people on the tools they use, is tempting. But it is also wrong. Although it is important to be mindful of the potential of generative AI to disrupt democracies, panic is unwarranted. Before the technological advances of the past two years, people were quite capable of transmitting all manner of destructive and terrible ideas to one another. The American presidential campaign of 2024 will be marred by disinformation about the rule of law and the integrity of elections. But its progenitor will not be something newfangled like ChatGPT. It will be Mr Trump.
Notes:
1. Technological determinism is a reductionist theory that posits a causal link between technology and a society's nature. It tries to explain who or what holds controlling power in human affairs, and questions the degree to which human thought or action is influenced by technological factors.
2. foible: a habit or characteristic that someone has which is considered rather strange, foolish, or bad but which is also considered unimportant
3. progenitor: the progenitor of an idea or invention is the person who first thought of it
4. newfangled: if someone describes a new idea or a new piece of equipment as newfangled, they mean that it is too complicated or is unnecessary
Opinion | Commentary | Reflections
Neil, a foreign-trade worker and devoted reader of The Economist
Most ordinary people lack the capacity for independent thinking, so much of the time they simply echo others and get swept along. We often receive articles forwarded by our elders on WeChat; one glance at the clickbait headline tells us it is fake news, yet they believe it anyway, and sometimes no amount of explaining gets through. Even as we lament how easily our elders are brainwashed, we ourselves are being swayed by the information we consume. If only one or two people around you were panic-buying salt, you might tell them there is no need; but if most people were doing the same thing and urging you to join in, your attitude would surely start to shift, and the next day you might well go and buy salt yourself.
Fake news cannot be avoided: where there is truth, there is falsehood. The more widely a rumour circulates, the worse the resulting misinformation, and vice versa. That makes the mindset of those who control the channels of distribution especially important. The Economist has long been hostile to Trump, and many of its articles are steeped in that stance. The bias of news media will inevitably shape the public's judgment.