As a research scientist at Google, Margaret Mitchell helps develop computers that can communicate about what they see and understand. She warns that today we are unknowingly encoding gaps, blind spots, and biases into artificial intelligence, and that we should consider what the technology we create today will mean for the future. As Mitchell puts it: "What we see now is a snapshot in the evolution of artificial intelligence. If we want AI to evolve in a way that helps humans, then we need to define the goals and strategies that enable that path now."
Speaker: Margaret Mitchell
Title: How to build AI that works in our favor
I work on helping computers communicate about the world around us.
There are a lot of ways to do this, and I like to focus on helping computers to talk about what they see and understand.
Given a scene like this, a modern computer-vision algorithm can tell you that there's a woman and there's a dog. It can tell you that the woman is smiling. It might even be able to tell you that the dog is incredibly cute.

I work on this problem thinking about how humans understand and process the world: the thoughts, memories and stories that a scene like this might evoke for humans, all the interconnections of related situations. Maybe you've seen a dog like this one before, or you've spent time running on a beach like this one, and that further evokes thoughts and memories of a past vacation, past trips to the beach, time spent running around with other dogs.

One of my guiding principles is that by helping computers to understand what it's like to have these experiences, to understand what we share and believe and feel, then we're in a great position to start evolving computer technology in a way that's complementary with our own experiences.

So, digging more deeply into this, a few years ago I began working on helping computers to generate human-like stories from sequences of images. So, one day, I was working with my computer to ask it what it thought about a trip to Australia. It took a look at the pictures, and it saw a koala.
It didn't know what a koala was, but it said it thought it was an interesting-looking creature.

Then I shared with it a sequence of images about a house burning down. It took a look at the images and it said, "This is an amazing view! This is spectacular!" It sent chills down my spine. It saw a horrible, life-changing and life-destroying event and thought it was something positive. I realized that it recognized the contrast, the reds, the yellows, and thought it was something worth remarking on positively. And part of why it was doing this was because most of the images I had given it were positive images. That's because people tend to share positive images when they talk about their experiences. When was the last time you saw a selfie at a funeral?

I realized that, as I worked on improving AI task by task, dataset by dataset, I was creating massive gaps, holes and blind spots in what it could understand. And while doing so, I was encoding all kinds of biases: biases that reflect a limited viewpoint, limited to a single dataset; biases that can reflect human biases found in the data, such as prejudice and stereotyping.

I thought back to the evolution of the technology that brought me to where I was that day: how the first color images were calibrated against a white woman's skin, meaning that color photography was biased against black faces. And that same bias, that same blind spot, continued well into the '90s. And the same blind spot continues even today, in how well we can recognize different people's faces in facial recognition technology.

I thought about the state of the art in research today, where we tend to limit our thinking to one dataset and one problem, and that in doing so, we were creating more blind spots and biases that the AI could further amplify. I realized then that we had to think deeply about how the technology we work on today looks in five years, in 10 years. Humans evolve slowly, with time to correct for issues in the interaction of humans and their environment. In contrast, artificial intelligence is evolving at an incredibly fast rate. And that means that it really matters that we think about this carefully right now, that we reflect on our own blind spots, our own biases, and think about how that's informing the technology we're creating, and discuss what the technology of today will mean for tomorrow.

CEOs and scientists have weighed in on what they think the artificial intelligence technology of the future will be. Stephen Hawking warns that "Artificial intelligence could end mankind." Elon Musk warns that it's an existential risk, and one of the greatest risks that we face as a civilization. Bill Gates has made the point, "I don't understand why people aren't more concerned."

But these views, they're part of the story. The math, the models, the basic building blocks of artificial intelligence are something that we can all access and work with. We have open-source tools for machine learning and intelligence that we can contribute to. And beyond that, we can share our experience. We can share our experiences with technology, and how it concerns us and how it excites us. We can discuss what we love. We can communicate with foresight about the aspects of technology that could be more beneficial or could be more problematic over time.

If we all focus on opening up the discussion on AI with foresight towards the future, this will help create a general conversation and awareness about what AI is now, what it can become, and all the things that we need to do in order to enable that outcome that best suits us. We already see and know this in the technology that we use today. We use smart phones and digital assistants and Roombas. Are they evil? Maybe sometimes. Are they beneficial? Yes, they're that, too. And they're not all the same. And there you already see a light shining on what the future holds. The future continues on from what we build and create right now. We set into motion that domino effect that carves out AI's evolutionary path. In our time right now, we shape the AI of tomorrow: technology that immerses us in augmented realities, bringing to life past worlds; technology that helps people to share their experiences when they have difficulty communicating; technology built on understanding the streaming visual worlds, used as technology for self-driving cars; technology built on understanding images and generating language, evolving into technology that helps people who are visually impaired be better able to access the visual world.

And we also see how technology can lead to problems. We have technology today that analyzes physical characteristics we're born with, such as the color of our skin or the look of our face, in order to determine whether or not we might be criminals or terrorists. We have technology that crunches through our data, even data relating to our gender or our race, in order to determine whether or not we might get a loan.

All that we see now is a snapshot in the evolution of artificial intelligence.
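The failure mode the speaker describes earlier, where a model trained mostly on positive images calls a house fire "spectacular," can be sketched with a deliberately tiny toy model. This is entirely hypothetical and not her actual system (which was a neural captioning model); it only illustrates how a classifier trained on a positively skewed dataset inherits that skew:

```python
from collections import Counter

# Hypothetical toy "sentiment" model: a caption dataset skewed toward
# positive examples, mirroring how people mostly share positive images.
positive_captions = [
    "amazing view of the beach",
    "spectacular sunset colors",
    "cute dog running on the sand",
    "beautiful orange and red sky",
]
negative_captions = [
    "broken fence in the yard",  # very little negative data to learn from
]

def train(captions):
    """Count how often each word appears in a class of captions."""
    counts = Counter()
    for caption in captions:
        counts.update(caption.split())
    return counts

pos_counts = train(positive_captions)
neg_counts = train(negative_captions)

def label(caption):
    # Score each word by its frequency in each class; add-one smoothing
    # keeps unseen words from zeroing out a score.
    pos_score = sum(pos_counts[w] + 1 for w in caption.split())
    neg_score = sum(neg_counts[w] + 1 for w in caption.split())
    return "positive" if pos_score >= neg_score else "negative"

# A house fire shares surface features (reds, oranges, a "view") with
# the positive training data, so the model labels it positive.
print(label("orange and red flames view of the house"))  # positive
```

Because the model's only signal is co-occurrence with a skewed dataset, surface features shared with the positive examples drive the label, which is exactly the blind spot the talk describes.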
Because where we are right now is within a moment of that evolution. That means that what we do now will affect what happens down the line and in the future. If we want AI to evolve in a way that helps humans, then we need to define the goals and strategies that enable that path now.

What I'd like to see is something that fits well with humans, with our culture and with the environment: technology that aids and assists those of us with neurological conditions or other disabilities, in order to make life equally challenging for everyone; technology that works regardless of your demographics or the color of your skin.

And so today, what I focus on is the technology for tomorrow, and for 10 years from now. AI can turn out in many different ways. But in this case, it isn't a self-driving car without any destination. This is the car that we are driving. We choose when to speed up and when to slow down. We choose if we need to make a turn. We choose what the AI of the future will be. There's a vast playing field of all the things that artificial intelligence can become. It will become many things. And it's up to us now to figure out what we need to put in place to make sure the outcomes of artificial intelligence are the ones that will be better for all of us. Thank you.

Remark: All rights belong to TED. For more TED information, visit the official website at www.ted.com.