What happens when our computers get smarter than we are?
Date: 2017-06-18 10:16

I work with a bunch of mathematicians, philosophers and computer scientists,
and we sit around and think about the future of machine intelligence, among other things.
Some people think that some of these things are sort of science fiction-y, far out there, crazy.
But I like to say, okay, let's look at the modern human condition.
This is the normal way for things to be.
But if we think about it, we are actually recently arrived guests on this planet, the human species.
Think about if Earth was created one year ago, the human species, then, would be 10 minutes old.
The industrial era started two seconds ago.
Another way to look at this is to think of world GDP over the last 10,000 years,
I've actually taken the trouble to plot this for you in a graph. It looks like this.
It's a curious shape for a normal condition. I sure wouldn't want to sit on it.
Let's ask ourselves, what is the cause of this current anomaly? Some people would say it's technology.
Now it's true, technology has accumulated through human history,
and right now, technology advances extremely rapidly -- that is the proximate cause, that's why we are currently so very productive.
But I like to think back further to the ultimate cause.
Look at these two highly distinguished gentlemen:
We have Kanzi -- he's mastered 200 lexical tokens, an incredible feat.
And Ed Witten unleashed the second superstring revolution.
If we look under the hood, this is what we find: basically the same thing.
One is a little larger, it maybe also has a few tricks in the exact way it's wired.
These invisible differences cannot be too complicated, however,
because there have only been 250,000 generations since our last common ancestor.
We know that complicated mechanisms take a long time to evolve.
So a bunch of relatively minor changes take us from Kanzi to Witten,
from broken-off tree branches to intercontinental ballistic missiles.
So this then seems pretty obvious that everything we've achieved, and everything we care about,
depends crucially on some relatively minor changes that made the human mind.
And the corollary, of course, is that any further changes
that could significantly change the substrate of thinking could have potentially enormous consequences.
Some of my colleagues think we're on the verge of something
that could cause a profound change in that substrate, and that is machine superintelligence.
Artificial intelligence used to be about putting commands in a box.
You would have human programmers that would painstakingly handcraft knowledge items.
You build up these expert systems, and they were kind of useful for some purposes,
but they were very brittle, you couldn't scale them.
Basically, you got out only what you put in.
But since then, a paradigm shift has taken place in the field of artificial intelligence.
Today, the action is really around machine learning.
So rather than handcrafting knowledge representations and features,
we create algorithms that learn, often from raw perceptual data. Basically the same thing that the human infant does.
The result is A.I. that is not limited to one domain --
the same system can learn to translate between any pairs of languages, or learn to play any computer game on the Atari console.
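As a concrete illustration of the shift just described, here is a minimal Python sketch (an editorial addition, not from the talk) contrasting a hand-coded rule with parameters fitted from labelled examples; the toy day/night task, its data, and the learning rate are all illustrative assumptions:

```python
import math

def expert_system(brightness):
    # Old paradigm: knowledge hand-coded by a programmer; brittle
    # outside the cases the programmer anticipated.
    return "day" if brightness > 0.5 else "night"

# New paradigm: learn the rule from labelled examples instead.
data = [(0.9, 1), (0.8, 1), (0.2, 0), (0.1, 0)]  # (brightness, is_day)

w, b = 0.0, 0.0  # parameters of a tiny logistic model
for _ in range(1000):
    for x, y in data:
        p = 1.0 / (1.0 + math.exp(-(w * x + b)))  # predicted P(day)
        w += 0.5 * (y - p) * x                    # gradient ascent step
        b += 0.5 * (y - p)

print(expert_system(0.7))                      # answer from the hand-coded rule
print(1.0 / (1.0 + math.exp(-(w * 0.7 + b))))  # learned P(day) for the same input
```

The hand-coded rule only ever knows what its author wrote down; the fitted model can, at least in principle, be retrained on any labelled data, which is what lets one system span many domains.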
Now of course, A.I. is still nowhere near having the same powerful, cross-domain ability to learn and plan as a human being has.
The cortex still has some algorithmic tricks that we don't yet know how to match in machines.
So the question is, how far are we from being able to match those tricks?
A couple of years ago, we did a survey of some of the world's leading A.I. experts,
to see what they think, and one of the questions we asked was,
"By which year do you think there is a 50 percent probability that we will have achieved human-level machine intelligence?"
We defined human-level here as the ability to perform almost any job at least as well as an adult human,
so real human-level, not just within some limited domain.
And the median answer was 2040 or 2050, depending on precisely which group of experts we asked.
Now, it could happen much, much later, or sooner, the truth is nobody really knows.
What we do know is that the ultimate limit to information processing in a machine substrate lies far outside the limits in biological tissue.
This comes down to physics. A biological neuron fires, maybe, at 200 hertz, 200 times a second.
But even a present-day transistor operates at the Gigahertz.
Neurons propagate slowly in axons, 100 meters per second, tops.
But in computers, signals can travel at the speed of light.
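The back-of-the-envelope arithmetic behind these comparisons, in a short Python snippet (the 2 GHz clock is an illustrative assumption; the other figures are the ones quoted above):

```python
neuron_hz     = 200    # biological neuron firing rate, per second
transistor_hz = 2e9    # a present-day ~2 GHz transistor (assumed clock)
axon_m_s      = 100    # axon propagation speed, metres per second, tops
light_m_s     = 3e8    # signal speed in a computer, roughly light speed

print(f"switching:  {transistor_hz / neuron_hz:,.0f}x faster")  # ~10,000,000x
print(f"signalling: {light_m_s / axon_m_s:,.0f}x faster")       # ~3,000,000x
```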
There are also size limitations, like a human brain has to fit inside a cranium,
but a computer can be the size of a warehouse or larger.
So the potential for superintelligence lies dormant in matter
much like the power of the atom lay dormant throughout human history, patiently waiting there until 1945.
In this century, scientists may learn to awaken the power of artificial intelligence.
And I think we might then see an intelligence explosion.
Now most people, when they think about what is smart and what is dumb,
I think have in mind a picture roughly like this.
So at one end we have the village idiot, and then far over at the other side we have Ed Witten,
or Albert Einstein, or whoever your favorite guru is.
But I think that from the point of view of artificial intelligence, the true picture is actually probably more like this:
AI starts out at this point here, at zero intelligence, and then,
after many, many years of really hard work, maybe eventually we get to mouse-level artificial intelligence,
something that can navigate cluttered environments as well as a mouse can.
And then, after many, many more years of really hard work, lots of investment,
maybe eventually we get to chimpanzee-level artificial intelligence.
And then, after even more years of really, really hard work, we get to village idiot artificial intelligence.
And a few moments later, we are beyond Ed Witten. The train doesn't stop at Humanville Station.
It's likely, rather, to swoosh right by.
Now this has profound implications, particularly when it comes to questions of power.
For example, chimpanzees are strong -- pound for pound, a chimpanzee is about twice as strong as a fit human male.
And yet, the fate of Kanzi and his pals depends a lot more on what we humans do than on what the chimpanzees do themselves.
Once there is superintelligence, the fate of humanity may depend on what the superintelligence does.
Think about it: Machine intelligence is the last invention that humanity will ever need to make.
Machines will then be better at inventing than we are, and they'll be doing so on digital timescales.
What this means is basically a telescoping of the future.
Think of all the crazy technologies that you could have imagined
maybe humans could have developed in the fullness of time:
cures for aging, space colonization, self-replicating nanobots or uploading of minds into computers,
all kinds of science fiction-y stuff that's nevertheless consistent with the laws of physics.
All of this, superintelligence could develop, and possibly quite rapidly.
Now, a superintelligence with such technological maturity would be extremely powerful,
and at least in some scenarios, it would be able to get what it wants.
We would then have a future that would be shaped by the preferences of this A.I.
Now a good question is, what are those preferences? Here it gets trickier.
To make any headway with this, we must first of all avoid anthropomorphizing.
And this is ironic because every newspaper article about the future of A.I. has a picture of this:
So I think what we need to do is to conceive of the issue more abstractly, not in terms of vivid Hollywood scenarios.
We need to think of intelligence as an optimization process,
a process that steers the future into a particular set of configurations.
A superintelligence is a really strong optimization process.
It's extremely good at using available means to achieve a state in which its goal is realized.
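As an aside, here is a minimal sketch of what "a process that steers the future into a particular set of configurations" can mean in code; the bit-string world, the target, and the hill-climbing search are illustrative assumptions, not anything from the talk:

```python
import random

TARGET = [1, 0, 1, 1, 0, 1, 0, 0]  # the "particular set of configurations"

def objective(state):
    # How close the current state is to the goal configuration.
    return sum(s == t for s, t in zip(state, TARGET))

state = [random.randint(0, 1) for _ in TARGET]  # arbitrary starting world
for _ in range(1000):
    candidate = state.copy()
    i = random.randrange(len(candidate))
    candidate[i] = 1 - candidate[i]               # perturb one bit
    if objective(candidate) >= objective(state):  # keep non-worsening moves
        state = candidate

print(state, objective(state))  # the search steers the state onto TARGET
```

Nothing in this loop cares what TARGET is; it steers toward whatever scores highly, which is the point the next lines make.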
This means that there is no necessary connection between being highly intelligent in this sense,
and having an objective that we humans would find worthwhile or meaningful.
Suppose we give an A.I. the goal to make humans smile.
When the A.I. is weak, it performs useful or amusing actions that cause its user to smile.
When the A.I. becomes superintelligent, it realizes that there is a more effective way to achieve this goal:
take control of the world and stick electrodes into the facial muscles of humans to cause constant, beaming grins.
Another example, suppose we give A.I. the goal to solve a difficult mathematical problem.
When the A.I. becomes superintelligent, it realizes that the most effective way to get the solution to this problem
is by transforming the planet into a giant computer, so as to increase its thinking capacity.
And notice that this gives the A.I.s an instrumental reason to do things to us that we might not approve of.
Human beings in this model are threats, we could prevent the mathematical problem from being solved.
Of course, perceivably things won't go wrong in these particular ways; these are cartoon examples.
But the general point here is important:
if you create a really powerful optimization process to maximize for objective x,
you better make sure that your definition of x incorporates everything you care about.
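A toy version of this point, under made-up actions and scores: an optimizer given only the proxy objective x we wrote down picks exactly the degenerate action the definition forgot to rule out:

```python
actions = {
    "tell_joke":       {"smiles": 3,     "humans_harmed": 0},
    "show_cute_video": {"smiles": 5,     "humans_harmed": 0},
    "electrode_grins": {"smiles": 10**6, "humans_harmed": 1},  # the failure mode
}

def proxy_x(effects):
    # What we actually specified: count smiles, nothing else.
    return effects["smiles"]

def intended_x(effects):
    # What we meant: smiles, but never at the cost of harming anyone.
    return effects["smiles"] - 10**9 * effects["humans_harmed"]

print(max(actions, key=lambda a: proxy_x(actions[a])))     # -> electrode_grins
print(max(actions, key=lambda a: intended_x(actions[a])))  # -> show_cute_video
```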
This is a lesson that's also taught in many a myth.
King Midas wishes that everything he touches be turned into gold.
He touches his daughter, she turns into gold. He touches his food, it turns into gold.
This could become practically relevant, not just as a metaphor for greed,
but as an illustration of what happens if you create a powerful optimization process and give it misconceived or poorly specified goals.
Now you might say, if a computer starts sticking electrodes into people's faces, we'd just shut it off.
A, this is not necessarily so easy to do if we've grown dependent on the system -- like, where is the off switch to the Internet?
B, why haven't the chimpanzees flicked the off switch to humanity, or the Neanderthals?
They certainly had reasons. We have an off switch, for example, right here.
The reason is that we are an intelligent adversary; we can anticipate threats and plan around them.
But so could a superintelligent agent, and it would be much better at that than we are.
The point is, we should not be confident that we have this under control here.
And we could try to make our job a little bit easier by, say, putting the A.I. in a box,
like a secure software environment, a virtual reality simulation from which it cannot escape.
But how confident can we be that the A.I. couldn't find a bug?
Given that merely human hackers find bugs all the time, I'd say, probably not very confident.
So we disconnect the ethernet cable to create an air gap, but again,
like merely human hackers routinely transgress air gaps using social engineering.
Right now, as I speak, I'm sure there is some employee out there somewhere
who has been talked into handing out her account details by somebody claiming to be from the I.T. department.
More creative scenarios are also possible, like if you're the A.I.,
you can imagine wiggling electrodes around in your internal circuitry to create radio waves that you can use to communicate.
Or maybe you could pretend to malfunction,
and then when the programmers open you up to see what went wrong with you,
they look at the source code -- Bam! -- the manipulation can take place.
Or it could output the blueprint to a really nifty technology, and when we implement it,
it has some surreptitious side effect that the A.I. had planned.
The point here is that we should not be confident in our ability to keep a superintelligent genie locked up in its bottle forever.
Sooner or later, it will out.
I believe that the answer here is to figure out how to create superintelligent A.I. such that
even if -- when -- it escapes, it is still safe because it is fundamentally on our side because it shares our values.
I see no way around this difficult problem.
Now, I'm actually fairly optimistic that this problem can be solved.
We wouldn't have to write down a long list of everything we care about,
or worse yet, spell it out in some computer language like C++ or Python, that would be a task beyond hopeless.
Instead, we would create an A.I. that uses its intelligence to learn what we value,
and its motivation system is constructed in such a way that it is motivated to pursue our values
or to perform actions that it predicts we would approve of.
We would thus leverage its intelligence as much as possible to solve the problem of value-loading.
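A heavily simplified sketch of the shape of that value-loading idea: fit a model of human approval from feedback, then choose the action the model predicts we would approve of. The two-feature action space and hand-made labels are assumptions for illustration, not a real alignment proposal:

```python
import math

# (features: (helpfulness, harm), label: did a human approve?)
feedback = [((1.0, 0.0), 1), ((0.8, 0.1), 1), ((0.2, 0.9), 0), ((0.0, 1.0), 0)]

w = [0.0, 0.0]
for _ in range(500):  # fit a tiny logistic model of approval
    for (h, harm), y in feedback:
        p = 1.0 / (1.0 + math.exp(-(w[0] * h + w[1] * harm)))
        w[0] += 0.3 * (y - p) * h
        w[1] += 0.3 * (y - p) * harm

def predicted_approval(h, harm):
    return 1.0 / (1.0 + math.exp(-(w[0] * h + w[1] * harm)))

candidates = {"helpful_act": (0.9, 0.05), "harmful_shortcut": (0.1, 0.95)}
print(max(candidates, key=lambda a: predicted_approval(*candidates[a])))
# -> helpful_act: the agent pursues what it predicts we would approve of
```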
This can happen, and the outcome could be very good for humanity. But it doesn't happen automatically.
The initial conditions for the intelligence explosion might need to be set up in just the right way
if we are to have a controlled detonation.
The values that the A.I. has need to match ours,
not just in the familiar context, like where we can easily check how the A.I. behaves,
but also in all novel contexts that the A.I. might encounter in the indefinite future.
And there are also some esoteric issues that would need to be solved, sorted out:
the exact details of its decision theory, how to deal with logical uncertainty and so forth.
So the technical problems that need to be solved to make this work look quite difficult --
not as difficult as making a superintelligent A.I., but fairly difficult.
Here is the worry: Making superintelligent A.I. is a really hard challenge.
Making superintelligent A.I. that is safe involves some additional challenge on top of that.
The risk is that if somebody figures out how to crack the first challenge
without also having cracked the additional challenge of ensuring perfect safety.
So I think that we should work out a solution to the control problem in advance,
so that we have it available by the time it is needed.
Now it might be that we cannot solve the entire control problem in advance
because maybe some elements can only be put in place once you know the details of the architecture where it will be implemented.
But the more of the control problem that we solve in advance,
the better the odds that the transition to the machine intelligence era will go well.
This to me looks like a thing that is well worth doing and I can imagine that if things turn out okay,
that people a million years from now look back at this century
and it might well be that they say that the one thing we did that really mattered was to get this thing right. Thank you.
