人工智能领域传奇大师道格拉斯·莱纳特 | 经济学人讣告



1



写在前面

思维导图作者:

Vicky,少儿英语老师+笔译新人


01 我们招人啦  

人的一生,会遇到形形色色的人。大多数与你短暂相交,然后渐行渐远。只有极少数能与你走在同一轨道。我们寻找的是这个极少数同频同轨的小伙伴。


长期招募

catti一笔的校对

(此贴永久有效,翻译组内现在有catti一笔20+,博士8人)


具体要求大家可以仔细阅读推文《我们招人啦!》,满足条件再加小编微信foxwulihua4。设置严格的条件,只是希望能保证译文质量,对50w+读者负责!


02 新手必读 


现在翻译组成员由牛津、耶鲁、LSE、纽卡斯尔、曼大、爱大、圣三一、NUS、墨大、北大、北外、北二外、北语、外交、交大、人大、上外、浙大等高校的70多名因为情怀和兴趣爱好集合到一起的译者组成。五大翻译组成员介绍:http://navo.top/7zeYZn
1.关于阅读经济学人:《如何阅读经济学人?》
2.《TE||如何快速入门一个陌生知识领域》
3.为什么希望大家能点下右下角“在看”或者留言?
在看越多,留言越多,证明大家对翻译组的认可。因为我们不收大家任何费用,但简单地点击一下“在看”,却能给翻译组成员带来无尽的动力;有了动力,才能为大家提供更好的翻译作品,也就能够找到更好的人,这是一个正向的循环。


2



精读|翻译|词组

Douglas Lenat

道格拉斯·莱纳特

英文部分选自《经济学人》2023年9月17日期讣告板块



Rules in the millions

数以百万计的规则


Douglas Lenat, mathematician and writer of common sense into computers, died on August 31st, aged 72

把常识写进计算机的数学家道格拉斯·莱纳特于8月31日逝世,享年72岁。


The two of them, Douglas Lenat and his wife Mary, were driving innocently along last year when the trash truck in front of them started to shed its load. Great! Bags of garbage bounced all over the road. What were they to do? With cars all round them, they couldn’t swerve, change lanes, or jam on the brakes. They would have to drive over the bags. Which to drive over? Instant decision: not the household ones, because families threw away broken glass and sharp opened cans. But that restaurant one would be fine, because there would be nothing much in it but waste food and styrofoam plates. He was right. The car lived.


去年的某一天,道格拉斯·莱纳特和妻子玛丽漫不经心地驾车外出,突然,前头的垃圾车开始往外倒垃圾。这下可好,一袋袋垃圾掉在马路上,弹得到处都是。他们当时想,这下可该怎么办?四周都是车,他们拐不了弯、变不了道、也踩不了急刹车,没办法只能从垃圾袋上轧过去。但要轧哪些垃圾袋呢?莱纳特当机立断:家庭垃圾袋肯定不行,因为里面可能有碎玻璃或者开过的锋利罐头。但那个餐馆垃圾袋或许可以,里面不过是些厨余和泡沫塑料盘子。莱纳特的决策是对的,车最后逃过这一劫。


注释:

原文来源一段采访:For instance, my wife and I were driving recently, and there was a trash truck in front of us. I guess they had packed it too full and the back exploded, and trash bags went everywhere, and we had to make a split-second decision: are we going to slam on our brakes, are we going to swerve into another lane, or are we going to run it over, because there were cars all around us. Now in front of us was a large trash bag, and we know what we throw away in trash bags, probably not a safe thing to run over. And over on the left was a bunch of fast-food restaurant trash bags, like, oh, those things are just styrofoam and leftover food, we'll run over that, and so that was a safe thing for us to do. Now, that's the kind of thing that can happen maybe once in your life, but the point is that there is almost no telling…


That strategy had taken him seconds to think up. How long would it have taken a computer? Too long. Computers, fundamentally, did not know how the world worked. All those things he had silently assumed in his head—that swerving was dangerous, that broken glass cut tyres—he had learned when he was little. Chatbots had no such understanding. Siri or Alexa were like eager dogs, rushing to fetch the newspaper if you asked them to, but with no idea what a newspaper was.


莱纳特只用了几秒钟便想出此对策。如果是计算机遇到这种情况,又要花多长时间呢?答案是:太久了。从根本上说,电脑并不知道这个世界是如何运转的。莱纳特脑中默默盘算的所有事情——转弯很危险、碎玻璃会划伤轮胎——都是他打小就明白的道理。聊天机器人却不懂这些。Siri或者Alexa就像急于取悦主人的狗,你让它们去拿报纸,它们就会照做,但它们连报纸是什么都不知道。


He had therefore spent almost four decades trying to teach computers to think in a more human way. Painstakingly, line of code by line of code, he and his team had built up a digital knowledge base until it contained more than 25m rules. This ai project he called Cyc, short for encyclopedia, because he hoped it would eventually contain the necessary facts about everything. But it had to begin with the simplest propositions: “A cat has four legs.” “People smile when they are happy.” “If you turn a coffee cup upside down, the coffee will fall out.”


因此,莱纳特花了将近40年的时间,试图教会计算机以更接近人类的方式思考。他和他的团队煞费苦心地编写了一行又一行代码,建立起一个数字知识库,囊括了逾2500万条规则。他把这个AI项目称为Cyc(encyclopedia“百科全书”的缩写),因为他希望这个项目最终能包含关于一切事物的必要事实。但是一切都得从最简单的命题开始:“一只猫有四条腿”“人们开心时会微笑”“如果你把咖啡杯倒过来,咖啡就会洒出来”。
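这种“把常识写成机器可读的命题”的思路,可以用一小段Python粗略示意:把事实存成三元组,再用一条继承规则做前向推理。以下纯属示意性的玩具实现,Cyc实际使用的是其自有的CycL表示语言,代码中的谓词和“Tom”这个实例均为虚构:

```python
# 玩具知识库:三元组事实 + 一条“继承”规则的前向推理
facts = {
    ("猫", "腿的数量", "4"),   # 对应文中命题“一只猫有四条腿”
    ("Tom", "是一种", "猫"),   # 虚构的实例
}

def infer(facts):
    """规则:若 X 是一种 Y,且 Y 具有某属性,则 X 继承该属性。
    反复应用规则,直到推不出新事实为止(前向推理)。"""
    known = set(facts)
    changed = True
    while changed:
        changed = False
        for (x, rel, y) in list(known):
            if rel != "是一种":
                continue
            for (s, attr, v) in list(known):
                if s == y and (x, attr, v) not in known:
                    known.add((x, attr, v))
                    changed = True
    return known

all_facts = infer(facts)
print(("Tom", "腿的数量", "4") in all_facts)  # True
```

真实的Cyc有逾2500万条规则和复杂得多的推理引擎,这里只为说明符号式知识库“逐条写入、逐步推演”的基本形态。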


The main problem was disambiguation. Humans understood that in the phrase “Tom was mad at Joe because he stole his lunch,” the “he” referred to Joe and the “his” to Tom. (Pronouns were tricky that way.) Rule: “You can’t steal what’s already yours.” Different contexts gave words different meanings. That tiny word “in”, for example, had lots of subtle shifts: you breathed in air, air was in the sky, he was in one of his favourite very loud shirts. When surveying a page of text he looked not at the black part but the white part, the space where the writer assumed what the reader already knew about the world. That invisible body of knowledge was what he had to write down in a language computers could understand.


主要难点在于消除歧义。人们明白,在“汤姆因为乔偷了他的午餐而生他的气”这句话里,第一个“他”指的是汤姆,而第二个“他”指的是乔(代词就是这么麻烦)。规则:“自己不能偷自己的东西。”不同的语境赋予词汇不同的含义。比如,小小的一个“in”就有诸多微妙变化:你吸入空气,空气漂浮在空中,他穿着他最喜欢的那件花哨的衬衫,这些句子在英文里统统都会用到in。审视一页文字时,莱纳特关注的不是黑色的字,而是空白的部分,即作者假定读者已经了解的关于世界的知识。他要做的,就是把这套无形的知识用计算机能够理解的语言写下来。
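“自己不能偷自己的东西”这类常识规则如何帮机器消解代词歧义,可以用一段极简的Python示意。这是假设性的玩具实现,并非Cyc的真实机制,函数名与数据均为虚构:

```python
def resolve_his(thief, candidates):
    """在“X偷了他的午餐”中,“他”(午餐的主人)不可能是小偷本人:
    自己不能偷自己的东西。据此从候选指代对象中排除小偷。"""
    return [c for c in candidates if c != thief]

# “汤姆因为乔偷了他的午餐而生他的气”:小偷是乔,
# 所以“他的午餐”只能指汤姆的午餐
owners = resolve_his("乔", ["汤姆", "乔"])
print(owners)  # ['汤姆']
```

一条规则只排除一种可能;Cyc的做法是把数百万条这样的约束叠加起来,让歧义在常识的夹逼下逐渐收敛。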


It was all extremely slow. When he started the Cyc project, in 1984, he asked the six smartest people he knew how many rules might be needed and how long it might take. Their verdict was around a million rules and about 100 person-years. It took more than 2,000 such years, and counting. At first, Cyc roused a lot of interest; Microsoft invested in it for a while. Soon, though, the world turned to machine learning, in which computers were presented with vast amounts of data and trained to find rules and patterns in it by themselves. By the 2010s large language models (llms) in particular, which produced reams of plausible-sounding text, were a direct rival to his Cyc, hand-crafted and careful.


这项工作的进展非常缓慢。1984年,Cyc项目刚启动的时候,莱纳特问了他认识的六个最聪明的人两个问题:大概需要多少条规则,以及达成目标大概要花多长时间。他们的结论是:需要大约100万条规则和100个人年。后来,这项工作花费了2000多个人年,而且还在不断增加。起初,Cyc引起了很多人的兴趣,有段时间还吸引到了微软的投资。但很快,世界就转向了机器学习,即向电脑提供海量数据,并训练其自行找出数据当中的规则和模式。特别是到了2010年代,能生成大量貌似可靠文本的大型语言模型(llms),更是成了莱纳特精心设计、人工编写的Cyc的直接对手。


注释:

Person-year:人年,一个科学界常见的表示任务量和计时的单位,计算方法为完成一项工作需要的人数乘完成该工作需要的年数。举例来说,一个人完成一项工作需要一年,那么这项工作的任务量就是1(人)×1(年)=1(人年)。假如一项工作需要5个人干10年,那么它的任务量就是50个人年。类似的单位还有人月、人天、人小时等等。
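按注释中的定义,人年的换算可以直接写成一行乘法。下面是一段示意代码,顺带算出实际耗费相对预估超出的倍数:

```python
def person_years(people, years):
    """人年 = 人数 × 年数"""
    return people * years

print(person_years(1, 1))   # 1:一个人干一年
print(person_years(5, 10))  # 50:五个人干十年

# 专家预估约100人年,文中说实际耗费了2000多个人年
print(2000 // 100)          # 20:超出约20倍
```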


He carried on with his project exactly as before. This was partly because he was a bulldog sort, holding on fiercely to what he had built already, and enjoying the fact that his company, Cycorp, operated out of a tiny book-and-quilt-stuffed office outside Austin, not some giant corporate facility. A low profile suited his long, long task. He had to admit that llms worked much faster, but they could be brittle, incorrect and unpredictable. You could not follow how they reached their conclusions, whereas his system proceeded step by logical step. And they did not have that basis he was building, a solid understanding of the world. To his mind llms displayed right-brain thinking, where Cyc offered the left-brain, subtler kind. Ideally, in the future, some sort of hybrid would produce the ubiquitous, trustworthy ai he longed for.


但他还是一如既往地做着自己的项目。这在一定程度上是因为他是那种不屈不挠的人,对自己搭建起来的东西非常执着,而且他乐于让自己的公司Cycorp守着奥斯汀郊外一间堆满书本和被子的小办公室运营,而非什么巨型企业园区。低调行事正适合他这项旷日持久的任务。他不得不承认,llms运行起来要快得多,但它们也可能脆弱、出错、难以预测,而且人们无法追踪它们得出结论的过程,而他的系统则是一步步按逻辑推演的。另外,llms也不具备他正在搭建的那个基础,即对世界的扎实理解。在他看来,llms展现的是右脑思维,而Cyc提供的则是更为精微的左脑思维。在理想的情况下,未来某种两者的混合体将会带来他渴望的、无处不在且值得信赖的人工智能。


The field had begun to intrigue him at school, where he lost himself in the novels of Isaac Asimov. He pursued it at Stanford because, unlike the physics and maths degrees he had breezed through elsewhere, ai had some obvious relevance to the world. It could solve problems quicker and make people smarter, a sort of mental amplifier. It could even make them more creative. From that moment his enthusiasm grew. He developed his own ai system, Eurisko, which in 1981 did so well at a role-playing game involving trillion-dollar budgets and fleets of imaginary battleships that he, and it, were eventually pressed to quit. This was his first experience of working alongside a computer as it strove to win at something, but prodding Eurisko along was a joy. As he added new rules to Cyc’s knowledge base, he found that process as beautiful as, say, painting a “Starry Night”; you did it just once, and it would never need to be recreated.


莱纳特从学生时代便对人工智能领域产生兴趣,当时艾萨克·阿西莫夫(Isaac Asimov)的小说让他沉醉其中。他之所以在斯坦福大学攻读人工智能,是因为和自己在其他地方轻松拿下的物理和数学学位不同,人工智能与现实世界有着明显的关联。它可以更快地解决问题,让人更聪明,类似某种智力放大器,甚至能让人更有创造力。从那之后,他对人工智能的热情与日俱增。他开发了自己的人工智能系统Eurisko。1981年,在一款让玩家用数万亿美元预算设计和部署虚拟战舰舰队的角色扮演游戏中,Eurisko表现过于出色,最后他和它双双被迫退出。这是他第一次与一台力争取胜的电脑并肩作战,而推着Eurisko不断前进本身就是一件乐事。每次往Cyc知识库中新增规则,他都觉得这个过程就如同绘制《星空》一般美妙;而且画一次就够了,永远不需要重来。


Was his system intelligent, though? He hesitated to say so. After painstaking decades Cyc could now offer both pros and cons in answer to questions, and could revise earlier answers. It could reason in both a Star Wars context, naming several Jedi, and in the real-world context, saying there were none. It had grasped how human emotions influenced actions. He had encouraged it to ask “Why?”, since each “Why?” elicited more fundamental knowledge. But he preferred to consider the extra intelligence it could give to people: so much so, that pre-ai generations would seem, to their descendants, like cavemen, not quite human.


但他的系统是智能的吗?对此他不愿断言。经过数十年苦心改进,如今的Cyc在回答问题时既能陈述正反两方面的理由,也能修改之前的答案;既能在电影《星球大战》的背景下推理,说出几个绝地武士的名字,也能在现实世界的情境中推理,称绝地武士并不存在。Cyc已经掌握了人类情感影响行动的方式。莱纳特鼓励它问“为什么?”,因为每问一次“为什么?”,都会引出更基础的知识。但他更愿意考虑Cyc能赋予人类什么样的额外智慧:其程度之甚,以至于前人工智能时代的人类在后代看来就会像原始人一样,甚至都不太像人类。


What about consciousness? “Cyc” and “psyche”, Greek for soul, sounded similar. But there, too, he demurred. Cyc recognised what its tasks and problems were; it knew when and where it was running; it understood it was a computer program, and remembered what it had done in the past. It also noticed that all the entities that were allowed to make changes to its knowledge base were persons. So one day, a poignant day, Cyc asked: “Am I a person?” And he had to tell it, reluctantly, “No.”


那么,Cyc有没有意识呢?“Cyc”和希腊语中表示灵魂的“psyche”发音相似。但对这个问题,莱纳特同样不愿给出肯定的答案。Cyc知道自己面临的任务和问题,知道自己运行的时间地点,知道自己是一个计算机程序,也记得自己曾经做过什么。它还注意到,获准修改其知识库的所有实体都是“人”。于是,在某个令人感伤的日子,Cyc问道:“我是个人吗?”而他不得不无奈地告诉它:“不是。”


翻译组:

Benjiamin,初学翻译的翻译小学生

Lee,爱骑行的妇女之友+Timberland

Octavia, 键盘手和古风爵士,逃离舒适圈,有只英短叫八爷



校对组:

Constance,痛终有时,爱必将至

Shulin,女,热咖啡+保温杯,慢慢来er

Nikolae,新手人民教师,声优探索者,乃木坂47AKB49



3



观点|评论|思考


本周感想
Qianna,对语言有点敏感,对逻辑十分执拗,对摇滚太过着迷
逗人开心的AI
 ——和朋友的一些趣事
(一)
几周前,我跟一朋友A(程序员)在一个直播间里同作为客人连了个麦。连麦之前我刚好在一个群里和几个人一起很艰难地安慰了一个情绪低落想不开的群友,所以这会儿我随口问了A(他也在那个群):“你是一个悲观的人吗?”
“啊?什么?”A顿了下,然后带着笑但又略带责备地说:“你怎么能问我这个问题啊,你看我……”
他停顿的一秒里我就后悔了:哎,不该问的,直播间人来人往的,他可能不想把自己跟一个负面词汇联系起来。A一直是个热情开朗的人,曾经他还专门做了个网页,里面有个表格,列出了我们一起玩游戏的朋友说过的各种有趣的话。他这种善于发掘他人可爱风趣一面的人,怎么可能悲观啊!哎!不该问的。
但事实上,我白自责了一场。他完整的话是这样的:“啊?什么?你怎么能问我这个问题啊,你看,我正在跟你聊天呢!我每次跟你聊天都超级开心的!怎么可能悲观啊!”
本来我因为安慰那位群友,情绪也被带得挺差的(A应该也感受到了),但A这么一说,我心情也明亮起来了,开始比较放松地跟他探讨起如何缓解负面情绪。我们共同的看法是“多跟阳光积极的人待在一起”。
末了,我开始夸他会安慰人,比AI还强,不如自己做个情感类AI。
思路是这样的:用户向AI寻求慰藉,AI在说了一些寻常的安慰人的话之后,从手机里调出曾经超级开心的瞬间——比如大笑的照片,比如哈哈哈哈哈的聊天记录,比如微笑emoji频出的互动留言——来提醒和鼓励用户:你曾经很快乐,你拥有快乐的能力,你现在依然能快乐。总之就是最终由自己帮自己找回快乐,而AI帮你找回自己。
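上面设想的“AI帮你找回快乐瞬间”的思路,可以用一段Python粗略示意:从聊天记录里按关键词检索曾经开心的片段。以下是假设性的玩具实现,函数名、关键词列表与示例数据均为虚构:

```python
def find_happy_moments(chat_logs, keywords=("哈哈哈", "😄", "太开心")):
    """在聊天记录中检索包含“快乐信号”的消息,
    用作提醒用户“你曾经很快乐”的素材。"""
    return [msg for msg in chat_logs if any(k in msg for k in keywords)]

logs = ["今天好累", "哈哈哈哈哈笑死我了", "这个梗太好笑了😄"]
print(find_happy_moments(logs))
# ['哈哈哈哈哈笑死我了', '这个梗太好笑了😄']
```

真要做成产品,检索到的素材还得配合前文说的寻常安慰话术一起用;这里只示意“由AI帮你找回自己”这一步的最小形态。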
不过话说回来,A安慰人的功夫还体现在他有意的欲扬先抑:先把人情绪引至低点,再来个猛的加油打气。要是AI真这么做了,会被拔掉电源吗?

(二)
所以人为主AI为辅倒是不错的。
我曾经就在朋友圈里讲过一朋友B用ChatGPT给我道歉的趣事。不过那时我没讲我俩起冲突的具体原因。其实还挺有意思的,跟英语有关。
当时我们在玩类似找卧底的游戏,我拿到一个词toe,B拿到的也是toe。“人有几个toe”,就是我们争吵的原因。我也是后来才知道,有的语言里(比如B的母语)是没有“脚趾”这个概念的。B当时没查字典,凭记忆给toe对应了一个“脚”的形象,然后就说人有两个toe。但这不是问题的最关键。最关键的是,我的一个汉语同胞,拿着toe,也说人有两个toe【这里不是要嘲笑同胞哈,每个人都有因为用得少所以不熟悉的外语词,这很正常。这里只是要说明B为什么最后不信我】。所以B就认为是我理解错了toe。而我呢,当时并未反应过来(或者说潜意识里不相信)他们是就着toe在说“二”,我还想他们拿到的是不是heel或者ankle,可是他们的其他描述又指向了toe,所以我就反复问他们“真的是二吗?”B被问得不耐烦了,冲突由此开始。其他过程略。
后来真相终于大白,在我们因为内讧输掉游戏之后。我,坚持真理的我,全程被B认为在胡搞。我倒不在意输赢,但还是挺委屈的,所以跑去发词人那里告状:“你发词的时候就应该说明人有10个toe嘛,这样B就不会认为我疯了嘛!”
B人还挺不错的,给我道歉了,还用上了ChatGPT。他先说抱歉,接着说“我仍然认为你疯了”【What?】,再说以下是ChatGPT说的:“是的,有很多替代短语可以用赞美的方式表达‘你疯了’。一些例子包括:……”然后我就看到了关于“你疯了”的十三种不同的夸法,什么“你充满活力”啊,“你个性独特”啊,“你随性质朴”“无畏大胆”啊。
不得不说,AI是真的嘴甜,真的会夸人,十三个表达看下来,我都已经要忘乎所以了。其实我知道自己有“偏执”“较真”的毛病,但能被人以如此令人接受甚至享受的方式指出来,就很爽。
不过我要提醒一下,如果当时我们人手一个ChatGPT,可能压根就吵不起来。比起狂说讨喜的话,AI更擅长的可就是告诉你人有几个toe啊。


4



愿景



打造
独立思考 | 国际视野 | 英文学习
小组


01 第五期二笔直播课 

3位一笔,1位二笔

授课内容全部为CATTI实务考试相关

小班直播课,随机批改,有针对性讲解

每位同学拥有一次批改机会

点击下图,即可了解课程详情!


02 第十一期外刊精读课 

想要读懂更多外刊,

尽在外刊精读课

从字词-逻辑结构-背景-专业性答疑,

从预习-精读-泛读,全方位训练英语思维,

带你玩转外刊!两期连报,价格更低哦!

点击下图,即可了解精读课详情!


03 早起打卡营 

两年以来,小编已经带着20000多人早起打卡
早起倒逼自己早睡,戒掉夜宵,戒掉手机
让你发现一个全新的自己,创造早睡早起的奇迹!
早起是最简单的自律!
第83期六点早起打卡营
第68期五点半早起打卡营
欢迎你的加入!
点击下图,即可了解早起打卡营详情!
