TED Talk (video + MP3 + bilingual subtitles): Can we build AI without losing control over it? (2)
Date: 2017-11-08 09:32

Transcript

It's as though we stand before two doors. Behind door number one, we stop making progress in building intelligent machines.
Our computer hardware and software just stops getting better for some reason.
Now take a moment to consider why this might happen.
I mean, given how valuable intelligence and automation are, we will continue to improve our technology if we are at all able to.
What could stop us from doing this? A full-scale nuclear war? A global pandemic? An asteroid impact?
Justin Bieber becoming president of the United States?
The point is, something would have to destroy civilization as we know it.

You have to imagine how bad it would have to be to prevent us from making improvements in our technology permanently, generation after generation.
Almost by definition, this is the worst thing that's ever happened in human history.
So the only alternative, and this is what lies behind door number two,
is that we continue to improve our intelligent machines year after year after year.
At a certain point, we will build machines that are smarter than we are,
and once we have machines that are smarter than we are, they will begin to improve themselves.
And then we risk what the mathematician I. J. Good called an "intelligence explosion," that the process could get away from us.

About the Talk

