Dozens of scientists, entrepreneurs and investors involved in the field of artificial intelligence, including Stephen Hawking and Elon Musk, have signed an open letter warning that greater focus is needed on its safety and social benefits.
The letter and an accompanying paper from the Future of Life Institute, which suggests research priorities for “robust and beneficial” artificial intelligence, come amid growing nervousness about the impact on jobs, or even on humanity’s long-term survival, of machines whose intelligence and capabilities could exceed those of the people who created them.
“Because of the great potential of AI, it is important to research how to reap its benefits while avoiding potential pitfalls,” the FLI’s letter says. “Our AI systems must do what we want them to do.”
The FLI was founded last year by volunteers including Jaan Tallinn, a co-founder of Skype, to stimulate research into “optimistic visions of the future” and to “mitigate existential risks facing humanity”, with a focus on those arising from the development of human-level artificial intelligence.
Mr Musk, the co-founder of SpaceX and Tesla, who sits on the FLI’s scientific advisory board alongside actor Morgan Freeman and cosmologist Stephen Hawking, has said that he believes uncontrolled artificial intelligence is “potentially more dangerous than nukes”.
Other signatories to the FLI’s letter include Luke Muehlhauser, executive director of the Machine Intelligence Research Institute; Frank Wilczek, a Nobel laureate and professor of physics at the Massachusetts Institute of Technology; and the entrepreneurs behind the artificial intelligence companies DeepMind and Vicarious, as well as several employees at Google, IBM and Microsoft.
Rather than fear-mongering, the letter is careful to highlight both the positive and negative effects of artificial intelligence.
“There is now a broad consensus that AI research is progressing steadily, and that its impact on society is likely to increase,” the letter reads. “The potential benefits are huge, since everything that civilisation has to offer is a product of human intelligence; we cannot predict what we might achieve when this intelligence is magnified by the tools AI may provide, but the eradication of disease and poverty are not unfathomable.”
Benefits from artificial intelligence research that are already coming into use include speech and image recognition, and self-driving vehicles. Some in Silicon Valley have estimated that more than 150 start-ups are working on artificial intelligence today.
As the field draws in more investment, and entrepreneurs and companies such as Google eye huge rewards from creating computers that can think for themselves, the FLI warns that greater focus on the social ramifications would be “timely”, drawing not only on computer science but also on economics, law and IT security.