How to Design AI That Helps Humans Rather Than Hurting Them
如何设计帮助人类而不是伤害人类的人工智能
Date: 2018-06-18 11:36


I work on helping computers communicate about the world around us.
我的工作是协助电脑和我们周遭的世界交流。
There are a lot of ways to do this, and I like to focus on helping computers to talk about what they see and understand.
这么做有很多的方式,我喜欢聚焦在协助电脑来谈它们看见了什么、了解了什么。
Given a scene like this, a modern computer-vision algorithm can tell you that there's a woman and there's a dog.
给予一个这样的情景,一个现代电脑视觉演算法就能告诉你情景中有一个女子和一只狗。
It can tell you that the woman is smiling. It might even be able to tell you that the dog is incredibly cute.
它能告诉你,女子在微笑。它甚至可能可以告诉你,那只狗相当可爱。
I work on this problem thinking about how humans understand and process the world.
我处理这个问题时,脑中想的是人类如何了解和处理这个世界。
The thoughts, memories and stories that a scene like this might evoke for humans.
对人类而言,这样的情景有可能会唤起什么样的思想、记忆、故事。
All the interconnections of related situations.
相关情况的所有相互连结。
Maybe you've seen a dog like this one before, or you've spent time running on a beach like this one,
也许你以前看过像这样的狗,或者你曾花时间在像这样的海滩上跑步,
and that further evokes thoughts and memories of a past vacation, past trips to the beach, times spent running around with other dogs.
那就会进一步唤起关于过去假期的思想和记忆,过去去海滩的旅行,花在和其他狗儿到处奔跑的时间。
One of my guiding principles is that by helping computers to understand what it's like to have these experiences,
我的指导原则之一,是要协助电脑了解有这些经验是什么样的感觉,
to understand what we share and believe and feel,
去了解我们共有什么、相信什么、感受到什么,
then we're in a great position to start evolving computer technology in a way that's complementary with our own experiences.
那么,我们就有很好的机会可以开始让电脑科技以一种和我们自身经验互补的方式来演变。
So, digging more deeply into this,
所以,我更深入研究这主题,
a few years ago I began working on helping computers to generate human-like stories from sequences of images.
几年前,我开始努力协助电脑产生出像人类一样的故事,从一连串的影像来产生。
So, one day, I was working with my computer to ask it what it thought about a trip to Australia.
有一天,我在问我的电脑它对于去澳洲旅行有什么想法。
It took a look at the pictures, and it saw a koala.
它看了照片,看见一只考拉。
It didn't know what the koala was, but it said it thought it was an interesting-looking creature.
它不知道考拉是什么,但它说它认为这是一只看起来很有趣的生物。
Then I shared with it a sequence of images about a house burning down.
我和它分享了一系列都和房子被烧毁有关的图片。
It took a look at the images and it said, "This is an amazing view! This is spectacular!"
它看了那些图片,说:“这是好棒的景色!这好壮观!”
It sent chills down my spine. It saw a horrible, life-changing and life-destroying event and thought it was something positive.
它让我背脊发凉。它看着一个会改变人生、摧毁生命的可怕事件,却以为它是很正面的东西。
I realized that it recognized the contrast, the reds, the yellows, and thought it was something worth remarking on positively.
我意识到,它能认出对比反差、红色、黄色,然后就认为它是值得正面评论的东西。
And part of why it was doing this was because most of the images I had given it were positive images.
它这么做的部分原因,是因为我给它的大多数图片都是正面的图片。
That's because people tend to share positive images when they talk about their experiences.
那是因为人在谈论他们的经验时,本来就倾向会分享正面的图片。
When was the last time you saw a selfie at a funeral?
你上次看到在葬礼上的自拍照是何时?
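The skew described above can be made concrete with a toy sketch (all data here is invented for illustration): if nearly every training example is labeled positive, a naive model that simply echoes the majority sentiment will praise any scene, including a house fire.

```python
from collections import Counter

# Toy training labels (invented for illustration): people mostly share
# positive images, so positive sentiment dominates the data.
training_sentiments = ["positive"] * 9 + ["negative"] * 1

# A naive "model" that always predicts the majority sentiment it saw in training.
majority = Counter(training_sentiments).most_common(1)[0][0]

def naive_sentiment(scene_description):
    # Ignores the actual content of the scene entirely.
    return majority

print(naive_sentiment("a house burning down"))  # predicts "positive"
```

This is only a caricature of how dataset imbalance propagates into model behavior; a real captioning model is far more complex, but the underlying pull toward the dominant pattern in the data is the same.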
I realized that, as I worked on improving AI task by task, dataset by dataset,
我意识到,当我努力在改善人工智能,一个任务一个任务、一个资料集一个资料集地改善,
that I was creating massive gaps, holes and blind spots in what it could understand.
结果我却在“它能了解什么”上创造出了大量的隔阂、漏洞以及盲点。
And while doing so, I was encoding all kinds of biases.
这么做的时候,我是在把各种偏见做编码。
Biases that reflect a limited viewpoint, limited to a single dataset
这些偏见反映出受限的观点,受限于单一资料集,
biases that can reflect human biases found in the data, such as prejudice and stereotyping.
这些偏见能反映出资料中的人类偏见,比如成见和刻板印象。
I thought back to the evolution of the technology that brought me to where I was that day
我回头去想一路带我走到那个时点的科技演变,
how the first color images were calibrated against a white woman's skin,
第一批彩色影像如何根据一个白种女子的皮肤来做校准,
meaning that color photography was biased against black faces.
这表示,彩色照片对于黑皮肤脸孔是有偏见的。
And that same bias, that same blind spot continued well into the '90s.
同样的偏见,同样的盲点,一直持续到九十年代。
And the same blind spot continues even today in how well we can recognize different people's faces in facial recognition technology.
而同样的盲点甚至持续到现今,出现在我们对于不同人的脸部辨识能力中,在人脸辨识技术中。
I thought about the state of the art in research today, where we tend to limit our thinking to one dataset and one problem.
我思考了现今研究的发展水平,我们倾向会把思路限制在一个资料集和一个问题上。
And that in doing so, we were creating more blind spots and biases that the AI could further amplify.
这么做时,我们就会创造出更多盲点和偏见,它们可能会被人工智能给放大。
I realized then that we had to think deeply about how the technology we work on today looks in five years, in 10 years.
那时,我意识到,我们必须要深入思考我们现今努力发展的科技,在五年、十年后会是什么样子。
Humans evolve slowly, with time to correct for issues in the interaction of humans and their environment.
人类进化得很慢,有时间可以去修正在人类互动以及其环境中的议题。
In contrast, artificial intelligence is evolving at an incredibly fast rate.
相对的,人工智能的进化速度非常快。
And that means that it really matters that we think about this carefully right now
那就意味着,很重要的是我们现在就要仔细思考这件事,
that we reflect on our own blind spots, our own biases,
我们要反省我们自己的盲点,我们自己的偏见,
and think about how that's informing the technology we're creating and discuss what the technology of today will mean for tomorrow.
并想想它们带给我们所创造出的科技什么样的信息,并讨论现今的科技在将来代表的是什么含义。
CEOs and scientists have weighed in on what they think the artificial intelligence technology of the future will be.
对于未来的人工智能科技会是什么样子,CEO们和科学家们纷纷发表了看法。
Stephen Hawking warns that "Artificial intelligence could end mankind."
史蒂芬·霍金警告称:“人工智能可能终结人类。”

Elon Musk warns that it's an existential risk and one of the greatest risks that we face as a civilization.
伊隆·马斯克警告过,它是个生存风险,也是我们人类文明所面临最大的风险之一。
Bill Gates has made the point, "I don't understand why people aren't more concerned."
比尔·盖茨有个论点:“我不了解为什么人们不更关心一点。”
But these views -- they're part of the story.
但,这些看法,它们是故事的一部分。
The math, the models, the basic building blocks of artificial intelligence are something that we can all access and work with.
数学、模型,人工智能的基础材料是我们所有人都能够取得并使用的。
We have open-source tools for machine learning and intelligence that we can contribute to.
我们有机器学习和智能用的开放原始码工具,我们都能对其做出贡献。
And beyond that, we can share our experience.
除此之外,我们可以分享我们的经验。
We can share our experiences with technology and how it concerns us and how it excites us.
分享关于科技以及它如何影响我们、它如何让我们兴奋的经验。
We can discuss what we love.
我们可以讨论我们所爱的。
We can communicate with foresight about the aspects of technology that could be more beneficial or could be more problematic over time.
我们能带着远见来交流,谈谈关于科技有哪些方面,随着时间发展可能可以更有助益或可能产生问题。
If we all focus on opening up the discussion on AI with foresight towards the future,
如果我们都能专注于以对未来的远见来展开关于人工智能的讨论,
this will help create a general conversation and awareness about what AI is now,
这就能创造出一般性的谈话和意识,关于人工智能现在是什么样子、
what it can become and all the things that we need to do in order to enable that outcome that best suits us.
它未来可以变成什么样子,以及所有我们需要做的事,以产生出最适合我们的结果。
We already see and know this in the technology that we use today. We use smart phones and digital assistants and Roombas.
我们已经在现今我们所使用的科技中看见这一点了。我们用智能手机、数字助理以及扫地机器人。
Are they evil? Maybe sometimes. Are they beneficial? Yes, they're that, too. And they're not all the same.
它们邪恶吗?也许有时候。它们有助益吗?是的,这也是事实。并且它们并非全都一样的。
And there you already see a light shining on what the future holds.
你们已经看到未来可能性的一丝光芒。
The future continues on from what we build and create right now.
未来延续的基础,是我们现在所建立和创造的。
We set into motion that domino effect that carves out AI's evolutionary path.
我们开始了骨牌效应,刻划出了人工智能的进化路径。
In our time right now, we shape the AI of tomorrow.
我们在现在这个时代,塑造了未来的人工智能。
Technology that immerses us in augmented realities bringing to life past worlds.
让我们能沉浸入增强现实中的科技,让过去的世界又活了过来。
Technology that helps people to share their experiences when they have difficulty communicating.
在沟通困难时,还能帮助人们分享经验的科技。
Technology built on understanding the streaming visual worlds used as technology for self-driving cars.
还有基于理解流媒体视觉世界之上的科技,被用来当作自动驾驶汽车的科技。
Technology built on understanding images and generating language,
还有基于理解图像和产生语言的科技,
evolving into technology that helps people who are visually impaired be better able to access the visual world.
进化成协助视觉损伤者的科技,让他们更能进入视觉的世界。
And we also see how technology can lead to problems.
我们也看到了科技是如何带来问题的。
We have technology today that analyzes physical characteristics we're born with
现今,我们有能够分析我们天生的身体特征的科技,
such as the color of our skin or the look of our face -- in order to determine whether or not we might be criminals or terrorists.
比如肤色或面部的外观,可以用来判断我们是否有可能是罪犯或恐怖份子。
We have technology that crunches through our data, even data relating to our gender or our race,
我们有科技能够分析我们的资料,甚至和我们的性别或种族相关的资料,
in order to determine whether or not we might get a loan.
来决定我们的贷款是否能被核准。
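A minimal, purely illustrative sketch of the point above (the groups, scores, and threshold are all invented): when the data a system crunches correlates with group membership, even a single uniform cutoff can produce very different outcomes for different groups.

```python
# Invented applicant data: (group, credit_score). The correlation between
# group and score here is fabricated purely to illustrate disparate outcomes.
applicants = [
    ("A", 700), ("A", 680), ("A", 720), ("A", 640),
    ("B", 590), ("B", 610), ("B", 700), ("B", 560),
]
THRESHOLD = 620  # one cutoff, applied identically to everyone

def approval_rate(group):
    scores = [s for g, s in applicants if g == group]
    approved = sum(s >= THRESHOLD for s in scores)
    return approved / len(scores)

# The same rule yields unequal approval rates across groups.
print(approval_rate("A"))  # 1.0
print(approval_rate("B"))  # 0.25
```

The rule itself never looks at the group label, which is exactly why such disparities are easy to miss unless outcomes are measured per group.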
All that we see now is a snapshot in the evolution of artificial intelligence.
我们现在所看见的一切,都是人工智能进化过程中的一个缩影。
Because where we are right now, is within a moment of that evolution.
因为我们现在所处的位置,是在那进化的一个时刻当中。
That means that what we do now will affect what happens down the line and in the future.
这意味着,我们现在所做的,会影响到后续未来发生的事。
If we want AI to evolve in a way that helps humans, then we need to define the goals and strategies that enable that path now.
如果我们想让人工智能的演进方式是对人类有助益的,那么我们现在就得要定义目标和策略,来让那条路成为可能。
What I'd like to see is something that fits well with humans, with our culture and with the environment.
我想要看见的东西是要能够和人类、我们的文化及我们的环境非常符合的东西。
Technology that aids and assists those of us with neurological conditions or other disabilities
这种科技要能够帮助和协助有神经系统疾病或其他残疾者的人,
in order to make life equally challenging for everyone.
让人生对于每个人的挑战程度是平等的。
Technology that works regardless of your demographics or the color of your skin.
这种科技的运作不会考量你的人口统计资料或你的肤色。
And so today, what I focus on is the technology for tomorrow and for 10 years from now.
所以,现今我关注的是明日的科技和十年后的科技。
AI can turn out in many different ways.
人工智能的发展可能会有许多不同的结果。
But in this case, it isn't a self-driving car without any destination.
但在这个情况中,它并不是没有目的地的自动驾驶汽车。
This is the car that we are driving. We choose when to speed up and when to slow down.
它是我们在开的汽车。我们选择何时要加速何时要减速。
We choose if we need to make a turn. We choose what the AI of the future will be.
我们选择是否要转弯。我们选择将来的人工智能会是哪一种。
There's a vast playing field of all the things that artificial intelligence can become. It will become many things.
人工智能能够变成各式各样的东西。它会变成许多东西。
And it's up to us now, in order to figure out what we need to put in place
现在,决定权在我们,我们要想清楚我们得要准备什么,
to make sure the outcomes of artificial intelligence are the ones that will be better for all of us. Thank you.
来确保人工智能的结果会是对所有人都更好的结果。谢谢。
