"人工智能教父"公开谈论AI技术的危险
Date: 2023-05-09 14:56

Listening Text

A man widely considered the "godfather" of artificial intelligence (AI) says he quit his job at Google to speak freely about the dangers of the technology.

Geoffrey Hinton recently spoke to The New York Times and other press about his experiences at Google, and his wider concerns about AI development.

He told the Times he left the search engine company last month after leading the Google Research team in Toronto, Canada, for 10 years.

During his career, the 75-year-old Hinton has pioneered work on deep learning and neural networks.

A neural network is a computer processing system built to act like the human brain.
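That definition can be made concrete with a minimal sketch. The layer sizes, weights, and activation below are illustrative assumptions, not details from the article: each artificial "neuron" computes a weighted sum of its inputs and passes it through a nonlinearity, and deep learning stacks many such layers.

```python
import numpy as np

def sigmoid(x):
    # Squashing activation: loosely analogous to a neuron "firing"
    return 1.0 / (1.0 + np.exp(-x))

def forward(x, w1, b1, w2, b2):
    # Each layer is a weighted sum of its inputs plus a bias,
    # passed through a nonlinear activation function.
    hidden = sigmoid(x @ w1 + b1)
    return sigmoid(hidden @ w2 + b2)

rng = np.random.default_rng(0)
x = rng.normal(size=(1, 3))                       # one input with 3 features
w1 = rng.normal(size=(3, 4)); b1 = np.zeros(4)    # 3 inputs -> 4 hidden units
w2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)    # 4 hidden units -> 1 output
y = forward(x, w1, b1, w2, b2)
print(y.shape)  # (1, 1): a single prediction between 0 and 1
```

In practice, the weights are not random: training adjusts them from large amounts of data, which is the part of the process Hinton's work helped pioneer.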

Hinton's work helped form the base for much of the AI technology in use today.

In 2019, Hinton and three other computer scientists received the Turing Award for their separate work related to neural networks.

The award has been described as the "Nobel Prize of Computing."

The other two winners, Yoshua Bengio and Yann LeCun, have also expressed concerns about the future of AI.

In recent months, a number of new AI technologies have been introduced.

Microsoft-backed American startup OpenAI launched its latest AI model, GPT-4, in March.

Other technology companies have invested in computing tools, including Google's Bard system.

Such tools are known as "chatbots."

The recently released AI tools have demonstrated the ability to perform human-like discussions and create complex documents based on short, written commands.

Speaking to the BBC, Hinton called the dangers of such tools "quite scary."

He added, "Right now, they're not more intelligent than us, as far as I can tell. But I think they soon will be."

He said he believes AI systems are getting smarter because of the massive amounts of data they take in and examine.

Hinton also told MIT Technology Review he fears some "bad" individuals might use AI in ways that could seriously harm society.

Such effects could include AI systems interfering in elections or inciting violence.

He told the Times he thinks AI systems could create a world in which people will "not be able to know what is true anymore."

Hinton said he retired from Google so that he could speak openly about the possible risks of the technology as someone who no longer works for the company.

"I want to talk about AI safety issues without having to worry about how it interacts with Google's business," he told MIT Technology Review.

Since announcing his departure, Hinton has said he thinks Google had "acted very responsibly" in its own AI development.

In March, hundreds of AI experts and industry leaders released an open letter expressing deep concerns about current AI development efforts.

The letter identified a number of harms that could result from such development.

These included increases in propaganda and misinformation, the loss of millions of jobs to machines and the possibility that AI could one day take control of our civilization.

The letter urges a halt to development of some kinds of AI.

Turing Award winner Bengio, Apple co-founder Steve Wozniak and Elon Musk, leader of SpaceX, Tesla and Twitter, signed the letter.

The organization that released the letter, the Future of Life Institute, is financially supported by the Musk Foundation.

Musk has long warned of the possible dangers of AI.

Last month, he told Fox News he planned to create his own version of some AI tools released in recent months.

Musk said his new AI tool would be called TruthGPT.

He described it as "truth-seeking AI" that will seek to understand humanity so it is less likely to destroy it.

Alondra Nelson is the former head of the White House Office of Science and Technology Policy, which seeks to create guidelines for the responsible use of AI tools.

She told The Associated Press, "For good or for not, what the chatbot moment has done is made AI a national conversation and an international conversation that doesn't only include AI experts and developers."

Nelson added that she hopes the recent attention on AI can create "a new conversation about what we want a democratic future and a non-exploitative future with technology to look like."

I'm Bryan Lynn.
