TED Talk (Video + MP3 + Bilingual Subtitles): How to Get Empowered, Not Overpowered, by AI (8)
Date: 2020-02-24 08:44


Transcript

So don't get me wrong here.
I'm not talking about space travel, merely about humanity's metaphorical journey into the future.
So one option that some of my AI colleagues like is to build superintelligence and keep it under human control,
like an enslaved god, disconnected from the internet and used to create unimaginable technology and wealth for whoever controls it.
But Lord Acton warned us that power corrupts, and absolute power corrupts absolutely,
so you might worry that maybe we humans just aren't smart enough, or wise enough rather, to handle this much power.
Also, aside from any moral qualms you might have about enslaving superior minds,
you might worry that maybe the superintelligence could outsmart us, break out and take over.


But I also have colleagues who are fine with AI taking over and even causing human extinction,
as long as we feel the AIs are our worthy descendants, like our children.
But how would we know that the AIs have adopted our best values
and aren't just unconscious zombies tricking us into anthropomorphizing them?
Also, shouldn't those people who don't want human extinction have a say in the matter, too?
Now, if you didn't like either of those two high-tech options,
it's important to remember that low-tech is suicide from a cosmic perspective,
because if we don't go far beyond today's technology, the question isn't whether humanity is going to go extinct,
merely whether we're going to get taken out by the next killer asteroid, supervolcano
or some other problem that better technology could have solved.

About the Talk

Many AI researchers expect AI to outsmart humans at all tasks and jobs within decades, enabling a future where we are limited only by the laws of physics, not by the limits of our intelligence. MIT physicist and AI researcher Max Tegmark separates the real opportunities and threats from the myths, describing the concrete steps we should take today to ensure that AI ends up being the best, rather than the worst, thing to ever happen to humanity.

