So, artificial intelligence is known for disrupting all kinds of industries. What about ice cream? What kind of mind-blowing new flavors could we generate with the power of an advanced artificial intelligence? So I teamed up with a group of coders from Kealing Middle School to find out the answer to this question. They collected over 1,600 existing ice cream flavors, and together, we fed them to an algorithm to see what it would generate. And here are some of the flavors that the AI came up with. These flavors are not delicious, as we might have hoped they would be.
So the question is: What happened? What went wrong? Is the AI trying to kill us? Or is it trying to do what we asked, and there was a problem?

In movies, when something goes wrong with AI, it's usually because the AI has decided that it doesn't want to obey the humans anymore, and it's got its own goals, thank you very much. In real life, though, the AI that we actually have is not nearly smart enough for that. It has the approximate computing power of an earthworm, or maybe at most a single honeybee, and actually, probably less.
Like, we're constantly learning new things about brains that make it clear how much our AIs don't measure up to real brains. So today's AI can do a task like identify a pedestrian in a picture, but it doesn't have a concept of what the pedestrian is beyond that it's a collection of lines and textures and things. It doesn't know what a human actually is. So will today's AI do what we ask it to do? It will if it can, but it might not do what we actually want.
So let's say that you were trying to get an AI to take this collection of robot parts and assemble them into some kind of robot to get from Point A to Point B. Now, if you were going to try and solve this problem by writing a traditional-style computer program, you would give the program step-by-step instructions on how to take these parts, how to assemble them into a robot with legs and then how to use those legs to walk to Point B. But when you're using AI to solve the problem, it goes differently. You don't tell it how to solve the problem, you just give it the goal, and it has to figure out for itself via trial and error how to reach that goal. And it turns out that the way AI tends to solve this particular problem is by doing this: it assembles itself into a tower and then falls over and lands at Point B. And technically, this solves the problem. Technically, it got to Point B.
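To make "trial and error against a bare goal" concrete, here is a toy sketch with made-up physics, not the actual simulation: candidate plans are scored only by the distance reached, and the search reliably discovers that falling over beats walking.

```python
import random

# Toy objective: score a robot plan ONLY by how far from Point A it ends up.
# Nothing in the objective says "use legs", so a tall tower that tips over wins.
def distance_reached(plan):
    if plan["strategy"] == "walk":
        return plan["speed"] * 10            # ten time steps of walking
    return plan["height"]                    # "fall": tip over, land at ~height

def random_plan():
    return {
        "strategy": random.choice(["walk", "fall"]),
        "speed": random.uniform(0.0, 0.5),   # crude evolved walkers are slow
        "height": random.uniform(0.0, 20.0), # towers can be built very tall
    }

# Trial and error: sample many plans, keep the one the objective likes best.
best = max((random_plan() for _ in range(10_000)), key=distance_reached)
print(best["strategy"])  # almost always "fall": technically, it got to Point B
```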
The danger of AI is not that it's going to rebel against us, it's that it's going to do exactly what we ask it to do. So then the trick of working with AI becomes: How do we set up the problem so that it actually does what we want?

So this little robot here is being controlled by an AI. The AI came up with a design for the robot legs and then figured out how to use them to get past all these obstacles. But when David Ha set up this experiment, he had to set it up with very, very strict limits on how big the AI was allowed to make the legs, because otherwise ... And technically, it got to the end of that obstacle course. So you see how hard it is to get AI to do something as simple as just walk.
So seeing the AI do this, you may say, OK, no fair, you can't just be a tall tower and fall over, you have to actually, like, use legs to walk. And it turns out, that doesn't always work, either. This AI's job was to move fast. They didn't tell it that it had to run facing forward or that it couldn't use its arms. So this is what you get when you train AI to move fast: you get things like somersaulting and silly walks. It's really common. So is twitching along the floor in a heap.
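One common patch is to move the missing constraints into the reward itself. Here is a minimal sketch with hypothetical state fields and penalty weights, not taken from any specific system: speed alone rewards the somersault, while added penalty terms make the honest runner win.

```python
from dataclasses import dataclass

@dataclass
class State:
    forward_velocity: float
    torso_pitch: float          # radians away from upright
    arm_ground_contact: bool

def naive_reward(s: State) -> float:
    return s.forward_velocity   # "move fast" and nothing else

def shaped_reward(s: State) -> float:
    # Hypothetical penalty weights; real setups tune terms like these.
    return (s.forward_velocity
            - 0.5 * abs(s.torso_pitch)            # reward staying upright
            - 1.0 * float(s.arm_ground_contact))  # discourage arm-walking

somersaulter = State(forward_velocity=3.0, torso_pitch=3.1, arm_ground_contact=True)
runner = State(forward_velocity=2.0, torso_pitch=0.1, arm_ground_contact=False)

print(naive_reward(somersaulter) > naive_reward(runner))    # True: flips win
print(shaped_reward(somersaulter) > shaped_reward(runner))  # False: runner wins
```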
So in my opinion, you know what should have been a whole lot weirder is the "Terminator" robots. Hacking "The Matrix" is another thing that AI will do if you give it a chance. So if you train an AI in a simulation, it will learn how to do things like hack into the simulation's math errors and harvest them for energy. Or it will figure out how to move faster by glitching repeatedly into the floor. When you're working with AI, it's less like working with another human and a lot more like working with some kind of weird force of nature. And it's really easy to accidentally give AI the wrong problem to solve, and often we don't realize that until something has actually gone wrong.
So here's an experiment I did where I wanted the AI to copy paint colors, to invent new paint colors, given a list like the one here on the left. And here's what the AI actually came up with. So technically, it did what I asked it to. I thought I was asking it for, like, nice paint color names, but what I was actually asking it to do was just imitate the kinds of letter combinations that it had seen in the original. And I didn't tell it anything about what words mean, or that there are maybe some words that it should avoid using in these paint colors. So its entire world is the data that I gave it. Like with the ice cream flavors, it doesn't know about anything else.
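To make "imitating letter combinations" concrete, here is a minimal character-level sketch. The name list is a toy stand-in and the real experiment used a more capable neural network, but the blindness to meaning is the same, for the paint colors and the ice cream flavors alike.

```python
import random
from collections import defaultdict

# Toy stand-ins for the training list; the model's entire world is these strings.
names = ["Dusty Rose", "Sea Salt", "Sand Dollar", "Midnight Blue"]

# Character-level bigram model: it learns which letter tends to follow
# which, and nothing at all about what any word means.
follows = defaultdict(list)
for name in names:
    text = "^" + name.lower() + "$"          # ^ marks start, $ marks end
    for a, b in zip(text, text[1:]):
        follows[a].append(b)

def invent_name():
    out, ch = [], "^"
    while True:
        ch = random.choice(follows[ch])
        if ch == "$":
            return "".join(out).title()
        out.append(ch)

print([invent_name() for _ in range(5)])     # letter-plausible, meaning-free
```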
So it is through the data that we often accidentally tell AI to do the wrong thing. This is a fish called a tench. And there was a group of researchers who trained an AI to identify this tench in pictures. But then when they asked it what part of the picture it was actually using to identify the fish, here's what it highlighted. Yes, those are human fingers. Why would it be looking for human fingers if it's trying to identify a fish? Well, it turns out that the tench is a trophy fish, and so in a lot of pictures that the AI had seen of this fish during training, the fish looked like this. And it didn't know that the fingers aren't part of the fish.
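Asking a model which pixels it relied on is an attribution question. Here is a minimal sketch of one common technique, gradient saliency, as an illustration of the general idea rather than the exact method those researchers used; as it happens, class index 0 in the standard ImageNet labeling really is the tench.

```python
import torch
import torchvision.models as models

# Gradient saliency: which input pixels most affect the class score?
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

image = torch.rand(1, 3, 224, 224, requires_grad=True)  # stand-in for a photo
score = model(image)[0, 0]      # ImageNet class 0 happens to be "tench"
score.backward()                # gradients flow back to the input pixels

saliency = image.grad.abs().max(dim=1).values   # per-pixel importance map
print(saliency.shape)  # torch.Size([1, 224, 224]); bright spots = "looked here"
```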
So you see why it is so hard to design an AI that actually can understand what it's looking at. And this is why designing the image recognition in self-driving cars is so hard, and why so many self-driving car failures happen because the AI gets confused.

I want to talk about an example from 2016. There was a fatal accident when somebody was using Tesla's autopilot AI, but instead of using it on the highway like it was designed for, they used it on city streets. And what happened was, a truck drove out in front of the car and the car failed to brake. Now, the AI definitely was trained to recognize trucks in pictures. But what it looks like happened is the AI was trained to recognize trucks on highway driving, where you would expect to see trucks from behind. Trucks on the side are not supposed to happen on a highway, and so when the AI saw this truck, it looks like the AI recognized it as most likely to be a road sign and therefore, safe to drive underneath.
Here's an AI misstep from a different field. Amazon recently had to give up on a résumé-sorting algorithm that they were working on when they discovered that the algorithm had learned to discriminate against women. What happened is they had trained it on example résumés of people who they had hired in the past. And from these examples, the AI learned to avoid the résumés of people who had gone to women's colleges or who had the word "women" somewhere in their résumé, as in "women's soccer team" or "Society of Women Engineers." The AI didn't know that it wasn't supposed to copy this particular thing that it had seen the humans do. And technically, it did what they asked it to do. They just accidentally asked it to do the wrong thing.
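The mechanism is easy to reproduce in miniature. Here is a toy sketch with made-up résumés and deliberately biased historical labels, nothing to do with Amazon's actual system: a bag-of-words classifier trained on those labels learns the word "women" as a negative signal.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Made-up résumés and biased historical labels, for illustration only.
resumes = [
    "software engineer java leadership award",     # hired
    "python developer open source contributor",    # hired
    "captain women's soccer team java developer",  # not hired (biased past)
    "society of women engineers python award",     # not hired (biased past)
]
hired = [1, 1, 0, 0]

vec = CountVectorizer().fit(resumes)
clf = LogisticRegression().fit(vec.transform(resumes), hired)

weights = dict(zip(vec.get_feature_names_out(), clf.coef_[0]))
print(weights["women"])  # negative: the model copied the bias in its data
```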
And this happens all the time with AI. AI can be really destructive and not know it. So the AIs that recommend new content on Facebook and YouTube are optimized to increase the number of clicks and views. And unfortunately, one way that they have found of doing this is to recommend the content of conspiracy theories or bigotry. The AIs themselves don't have any concept of what this content actually is, and they don't have any concept of what the consequences might be of recommending this content.
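The objective those systems chase can be sketched in a few lines. Here is a toy epsilon-greedy recommender with made-up items and click rates, nothing like the real systems in scale or design: its only signal is clicks, so whatever gets clicked gets recommended, with no representation of what the content says.

```python
import random

# Toy recommender: "content" is just an ID; clicks are the only signal.
click_rate = {"cat_video": 0.10, "conspiracy_clip": 0.30}  # made-up rates
counts = {item: 1 for item in click_rate}
clicks = {item: 0 for item in click_rate}

for _ in range(10_000):
    if random.random() < 0.1:                       # sometimes explore
        item = random.choice(list(click_rate))
    else:                                           # otherwise exploit the clickiest
        item = max(counts, key=lambda i: clicks[i] / counts[i])
    counts[item] += 1
    clicks[item] += random.random() < click_rate[item]

print(max(counts, key=counts.get))  # the clickier item wins, whatever it is
```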
So, when we're working with AI, it's up to us to avoid problems. And avoiding things going wrong may come down to the age-old problem of communication, where we as humans have to learn how to communicate with AI. We have to learn what AI is capable of doing and what it's not, and to understand that, with its tiny little worm brain, AI doesn't really understand what we're trying to ask it to do. So in other words, we have to be prepared to work with AI that's not the super-competent, all-knowing AI of science fiction. We have to be prepared to work with an AI that's the one that we actually have in the present day. And present-day AI is plenty weird enough. Thank you.