第511期: 应该害怕聊天机器人吗? Should we fear chatbots?
日期:2023-08-04 15:42

Hello. This is 6 Minute English from BBC Learning English. I’m Neil. And I’m Rob.

你好。这里是BBC英语六分钟。我是尼尔。我是罗伯。

Now, I’m sure most of us have interacted with a chatbot.

现在,我相信我们大多数人都和聊天机器人有过互动。

These are bits of computer technology that respond to text with text or respond to your voice.

这是一种计算机技术,可以用文本回应文本或回应你的声音。

You ask it a question and it usually comes up with an answer!

你问它一个问题,它通常会给出一个答案!

Yes, it’s almost like talking to another human, but of course it’s not – it’s just a clever piece of technology.

是的,这几乎就像和另一个人说话,但当然不是——这只是一项智能的技术。

It is becoming more sophisticated – more advanced and complex, but could they replace real human interaction altogether?

它正变得越来越精密——越来越先进和复杂,但它们能完全取代真正的人类互动吗?

We’ll discuss that more in a moment and find out if chatbots really think for themselves.

我们稍后会详细讨论这个问题,看看聊天机器人是否真的能独立思考。

But first I have a question for you, Rob.

但首先我有个问题要问你,罗伯。

The first computer program that allowed some kind of plausible conversation between humans and machines was invented in 1966, but what was it called?

第一个允许人与机器之间进行某种貌似合理的对话的计算机程序是在1966年发明的,但它叫什么?

Was it: a) ALEXA b) ELIZA c) PARRY.

是a) 亚莉克莎 b) 伊丽莎 c) 帕里。

It’s not Alexa – that’s too new – so I’ll guess c) PARRY.

不是亚莉克莎,太新了,所以我猜是c) 帕里。

I’ll reveal the answer at the end of the programme.

我将在节目的最后揭晓答案。

Now, the old chatbots of the 1960s and 70s were quite basic, but more recently, the technology is able to predict the next word that is likely to be used in a sentence, and it learns words and sentence structures.

好了,20世纪60年代和70年代的旧聊天机器人非常基础,但最近,这项技术能够预测句子中可能使用的下一个单词,它可以学习单词和句子结构。
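The "predict the next word" idea described above can be sketched as a toy bigram model. This is a much-simplified illustration (the function names and training sentence are invented for this example), nowhere near how a modern chatbot actually works:

```python
# A toy "predict the next word" model: count which word follows which
# in some training text, then suggest the most frequent follower.
from collections import Counter, defaultdict

def train(text):
    """Build a table mapping each word to a Counter of the words seen after it."""
    words = text.lower().split()
    followers = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        followers[current][nxt] += 1
    return followers

def predict_next(followers, word):
    """Return the most common word seen after `word`, or None if unknown."""
    options = followers.get(word.lower())
    if not options:
        return None
    return options.most_common(1)[0][0]

model = train("the cat sat on the mat and the cat slept")
print(predict_next(model, "the"))  # prints "cat" ("cat" follows "the" twice, "mat" once)
```

Chaining such predictions word after word produces fluent-looking text, which is exactly why, as discussed later in the programme, fluency alone is no guarantee of accuracy.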

It’s clever stuff.

这是很智能的东西。

I’ve experienced using them when talking to my bank - or when I have problems trying to book a ticket on a website.

当我和银行通话时,或者当我在网站上订票遇到问题时,我就使用过它们。

I no longer phone a human but I speak to a ‘virtual assistant’ instead.

我不再给人打电话,而是和一个“虚拟助理”说话。

Probably the most well-known chatbot at the moment is ChatGPT.

目前最有名的聊天机器人可能是ChatGPT。

It is. The claim is it’s able to answer anything you ask it.

是的。据称它能回答你的任何问题。

This includes writing students’ essays.

包括代写学生的论文。

This is something that was discussed on the BBC Radio 4 programme, Word of Mouth.

这是在BBC广播4频道的《口碑》节目中讨论过的事情。

Emily M Bender, Professor of Computational Linguistics at the University of Washington, explained why it’s dangerous to always trust what a chatbot is telling us…

华盛顿大学计算机语言学教授艾米丽·本德解释了为什么总相信聊天机器人告诉我们的东西是危险的……

We tend to react to grammatically fluent, coherent-seeming text as authoritative and reliable and valuable - and we need to be on guard against that, because what's coming out of ChatGPT is none of that.

我们倾向于把语法流畅连贯的文本看作是权威的、可靠的、有价值的——我们需要警惕这种情况,因为从ChatGPT中得到的文本并不是这样。

So, Professor Bender says that well written text that is coherent – that means it’s clear, carefully considered and sensible – makes us think what we are reading is reliable and authoritative.

所以,本德教授说,写得好的连贯文本——意味着它是清晰的,仔细考虑过和明智的——让我们认为正在阅读的东西是可靠权威的。

So it is respected, accurate and important sounding.

因此,它是受人尊敬的、准确的和重要的。

Yes, chatbots might appear to write in this way, but really, they are just predicting one word after another, based on what they have learnt.

是的,聊天机器人可能看起来是这样,但实际上,它们只是基于学到的知识,一个接一个地预测单词。

We should, therefore, be on guard – be careful and alert about the accuracy of what we are being told.

因此,我们应该保持警惕——对我们被告知的信息准确性保持谨慎和警惕。

One concern is that chatbots – a form of artificial intelligence – work a bit like a human brain in the way they can learn and process information.

有一个担忧是这样的,聊天机器人——人工智能的一种形式——在学习和处理信息的方式上有点像人类大脑。

They are able to learn from experience - something called deep learning.

它们能够从经验中学习——这就是所谓的深度学习。

A cognitive psychologist and computer scientist called Geoffrey Hinton, recently said he feared that chatbots could soon overtake the level of information that a human brain holds.

认知心理学家和计算机科学家杰弗里·辛顿最近表示,他担心聊天机器人可能很快就会超过人类大脑的信息量。

That’s a bit scary isn’t it?

这有点可怕,不是吗?

For now, chatbots can be useful for practical information, but sometimes we start to believe they are human, and we interact with them in a human-like way.

目前,聊天机器人可以提供实用的信息,但有时我们开始相信它们是人类,于是以一种类似人与人的方式与它们互动。

This can make us believe them even more.

这可以让我们更加相信它们。

Professor Emily M Bender, speaking on the BBC’s Word of Mouth programme, explains why we might feel like that…

艾米丽·本德教授在BBC的《口碑》节目中解释了为什么我们会有这样的感觉……

I think what's going on there is the kinds of answers you get depend on the questions you put in, because it's doing likely next word, likely next word, and so if as the human interacting with the machine you start asking it questions about

我认为你得到的答案取决于你输入的问题,因为它做的就是预测最可能的下一个词,再预测下一个词,所以如果人类在与机器互动时开始问它这样的问题

‘how do you feel, you know, Chatbot?’

“你感觉怎么样,聊天机器人?”

‘What do you think of this?’

“你觉得这个怎么样?”

And ‘what are your goals?’

和“你的目标是什么？”

You can provoke it to say things that sound like what a sentient entity would say...

你可以刺激它说一些听起来像有知觉的实体会说的话……

We are really primed to imagine a mind behind language whenever we encounter language.

每当我们遇到语言时,我们就会开始想象语言背后的思想。

And so, we really have to account for that when we're making decisions about these.

所以,我们在做决定的时候必须考虑到这一点。

So, although a chatbot might sound human, we really just ask it things to get a reaction – we provoke it – and it answers only with words it’s learned to use before, not because it has come up with a clever answer.

所以,尽管聊天机器人可能听起来像人类,但我们向它提问真的只是为了得到一个反应——刺激它——它只会用以前学过的词来回答,而不是因为它想出了一个巧妙的答案。

But it does sound like a sentient entity – sentient describes a living thing that experiences feelings.

但它听起来确实像一个有知觉的实体——有知觉是用来形容有情感的生物。

As Professor Bender says, we imagine that when something speaks there is a mind behind it.

正如本德教授所说,我们以为说话的东西背后有思想。

But sorry, Neil, they are not your friend, they are just machines!

但不好意思,尼尔,它们不是你的朋友,它们只是机器!

It’s strange then that we sometimes give chatbots names.

奇怪的是,我们有时会给聊天机器人起名字。

Alexa, Siri… and earlier I asked you what the name was for the first ever chatbot.

亚莉克莎、Siri……之前我问过你史上第一个聊天机器人叫什么名字。

And I guessed it was PARRY. Was I right?

我猜是帕里。我说的对吗?

You guessed wrong, I’m afraid.

恐怕你猜错了。

PARRY was an early form of chatbot from 1972, but the correct answer was ELIZA.

帕里是1972年的早期聊天机器人,但正确的答案是伊丽莎。

It was considered to be the first ‘chatterbot’ – as it was called then, and was developed by Joseph Weizenbaum at Massachusetts Institute of Technology.

它被认为是第一个“聊天机器人”——当时叫chatterbot,由麻省理工学院的约瑟夫·魏岑鲍姆开发。

Fascinating stuff.

迷人的东西。

OK, now let’s recap some of the vocabulary we highlighted in this programme.

好了,现在让我们回顾一下今天节目中强调的一些词汇。

Starting with sophisticated which can describe technology that is advanced and complex.

从精密的开始,它可以描述先进和复杂的技术。

Something that is coherent is clear, carefully considered and sensible.

有条理的意思是清晰的、经过深思熟虑的和明智的。

Authoritative – so it is respected, accurate and important sounding.

权威的——所以它的意思是受人尊敬的、准确的、重要的。

When you are on guard you must be careful and alert about something – it could be accuracy of what you see or hear, or just being aware of the dangers around you.

当你处于警戒状态时,你必须对某些事情保持谨慎和警惕——它可能是你所看到或听到的东西的准确性,也可能只是意识到你周围的危险。

To provoke means to do something that causes a reaction from someone.

挑衅是指为了引起某人的反应而做的某件事。

Sentient describes something that experiences feelings – so it’s something that is living.

有知觉的描述的是有感觉的东西,所以它是有生命的东西。

Once again, our six minutes are up. Goodbye. Bye for now.

六分钟又到了。再见。再见了。
