科技是如何应对极端主义和在线骚扰的
日期:2018-08-05 17:11

My relationship with the internet reminds me of the setup to a clichéd horror movie.
我与互联网的关系,让我想起老套恐怖片的情境。
You know, the blissfully happy family moves in to their perfect new home, excited about their perfect future,
幸福快乐的家庭,搬进一间美好的新房子,兴奋期待完美的未来,
and it's sunny outside and the birds are chirping...
外头阳光普照,鸟儿在唱歌...
And then it gets dark. And there are noises from the attic.
接着,天色暗了下来。阁楼传出了声响。
And we realize that that perfect new house isn't so perfect.
我们发现,那间美好的新房子并没有那么美好。
When I started working at Google in 2006, Facebook was just a two-year-old, and Twitter hadn't yet been born.
2006年,当我开始在谷歌工作时,脸书才刚推出两年,推特甚至还没问世。
And I was in absolute awe of the internet and all of its promise to make us closer and smarter and more free.
我对互联网满怀敬畏,它承诺要让我们更亲近、更聪明、更自由。
But as we were doing the inspiring work of building search engines and video-sharing sites and social networks,
但当我们开始进行这鼓舞人心的工作,建立搜寻引擎,建立影片分享网站和社交网络,
criminals, dictators and terrorists were figuring out how to use those same platforms against us.
罪犯、独裁者及恐怖分子都在想办法如何用同样的平台来对抗我们。
And we didn't have the foresight to stop them.
我们没有先见之明来阻止他们。
Over the last few years, geopolitical forces have come online to wreak havoc.
在过去几年,地缘政治势力也上网展开大破坏。
And in response, Google supported a few colleagues and me to set up a new group called Jigsaw,
作为回应,谷歌支持我和几位同事,成立了一个新小组,叫做“Jigsaw”,
with a mandate to make people safer from threats like violent extremism, censorship, persecution
我们的使命是要让大家更安全,避免受到像是极端主义、审查制度、迫害的威胁,
threats that feel very personal to me because I was born in Iran, and I left in the aftermath of a violent revolution.
我个人对这些威胁特别有感,因为我是在伊朗出生的,在一场暴力革命的余波中我被迫离开了伊朗。
But I've come to realize that even if we had all of the resources of all of the technology companies in the world,
但我渐渐了解到,即使我们有所有的资源,有全世界所有的科技公司,
we'd still fail if we overlooked one critical ingredient: the human experiences of the victims and perpetrators of those threats.
如果我们忽略了一项关键因素,我们仍然会失败:那些威胁的受害者与加害者的人类经历。
There are many challenges I could talk to you about today.
今天我其实可以与各位谈很多的挑战。
I'm going to focus on just two. The first is terrorism.
但我只打算着重两项。第一是恐怖主义。
So in order to understand the radicalization process, we met with dozens of former members of violent extremist groups.
为了要了解激进化的过程,我们和数十名暴力极端主义团体的前成员见面。
One was a British schoolgirl, who had been taken off of a plane at London Heathrow as she was trying to make her way to Syria to join ISIS.
其中一位是英国的女学生,她曾在伦敦希斯罗机场被带下飞机,因为当时她正打算前往叙利亚加入伊斯兰国。
And she was 13 years old. So I sat down with her and her father, and I said, "Why?"
她当时只有十三岁。我和她及她父亲坐下来谈,我说:“为什么?”
And she said, "I was looking at pictures of what life is like in Syria,
她说:“我在看一些叙利亚生活写照的图片,
and I thought I was going to go and live in the Islamic Disney World."
我以为我是要去住到伊斯兰的迪斯尼乐园。”
That's what she saw in ISIS.
这是她看到的伊斯兰国。
She thought she'd meet and marry a jihadi Brad Pitt and go shopping in the mall all day and live happily ever after.
她以为她会遇见一位如布拉德·皮特般的圣战士并嫁给他,整天都能去购物中心购物,从此幸福快乐地生活。
ISIS understands what drives people, and they carefully craft a message for each audience.
伊斯兰国知道什么能驱使人,他们会为每一位观众精心策划一则信息。
Just look at how many languages they translate their marketing material into.
光是去看看他们把他们的营销素材翻译成多少语言,就能了解了。
They make pamphlets, radio shows and videos in not just English and Arabic,
他们会制作小册子、广播节目和影片,不只用英语和阿拉伯语,
but German, Russian, French, Turkish, Kurdish, Hebrew, Mandarin Chinese.
还有德语、俄语、法语、土耳其语、库德语、希伯来语和汉语。
I've even seen an ISIS-produced video in sign language. Just think about that for a second:
我甚至看过一支伊斯兰国制作的影片是用手语的。花点时间思考一下:
ISIS took the time and made the effort to ensure their message is reaching the deaf and hard of hearing.
伊斯兰国投入时间和心力,来确保他们的信息能够传达给听障人士。
It's actually not tech-savviness that is the reason why ISIS wins hearts and minds.
伊斯兰国能赢得人心,其实并不是因为他们精通科技。
It's their insight into the prejudices, the vulnerabilities, the desires of the people they're trying to reach that does that.
因为他们有洞见,了解他们试图接触的人有什么偏见、脆弱、欲望,才能做到这一点。
That's why it's not enough for the online platforms to focus on removing recruiting material.
那就说明了为什么在线平台只把焦点放在移除召募素材是远远不够的。
If we want to have a shot at building meaningful technology that's going to counter radicalization,
如果我们想要有机会建立一种有意义的技术,用它来对抗极端化,
we have to start with the human journey at its core.
我们就得要从它核心的人类旅程开始着手。
So we went to Iraq to speak to young men who'd bought into ISIS's promise of heroism and righteousness,
所以,我们去了伊拉克,去和年轻人交谈,我们找的对象曾相信伊斯兰国所做的关于英雄主义与公正的承诺,
who'd taken up arms to fight for them and then who'd defected after they witnessed the brutality of ISIS's rule.
曾拿起武器为他们作战,接着,在目击了伊斯兰国统治的残酷之后选择变节。
And I'm sitting there in this makeshift prison in the north of Iraq
我坐在伊拉克北部的一间临时监狱里,
with this 23-year-old who had actually trained as a suicide bomber before defecting.
会见一位在变节前受过训练的自杀炸弹客,年仅23岁。
And he says, "I arrived in Syria full of hope, and immediately,
他说:“我到叙利亚时满怀着希望,可一下子
I had two of my prized possessions confiscated: my passport and my mobile phone."
我两项最重要的东西就被没收了:我的护照和我的手机。”
The symbols of his physical and digital liberty were taken away from him on arrival.
在他抵达时,这两样象征他实体自由与数位自由的东西被夺去了。
And then this is the way he described that moment of loss to me.
接着,他这样向我描述那个失去一切的时刻。
He said, "You know in 'Tom and Jerry,' when Jerry wants to escape,
他说:“在《汤姆与杰瑞》里,当杰瑞想要逃脱时,
and then Tom locks the door and swallows the key and you see it bulging out of his throat as it travels down?"
汤姆把门锁住,把钥匙吞掉,你还能看到钥匙沿着他的喉咙下滑时凸出来的形状,记得吗?”
And of course, I really could see the image that he was describing,
当然,我能看见他所描述的画面,
and I really did connect with the feeling that he was trying to convey, which was one of doom, when you know there's no way out.
我真的能和他试图传达的这种感受产生连结,一种在劫难逃的感受,你知道没有路可逃了。
And I was wondering: What, if anything, could have changed his mind the day that he left home?
而我很纳闷:在他离家的那一天,如果有的话,什么能改变他的心意?
So I asked, "If you knew everything that you know now about the suffering and the corruption, the brutality
于是我问:“如果你当时知道你现在知道的这些关于苦难、腐败、残酷的状况,
that day you left home, would you still have gone?"
在离家的那天就知道,你还会选择离开吗?”
And he said, "Yes." And I thought, "Holy crap, he said 'Yes.'"
他说:“会。”我心想:“老天爷,他说会。”
And then he said, "At that point, I was so brainwashed, I wasn't taking in any contradictory information. I couldn't have been swayed."
接着他说:“在那个时间点,我完全被洗脑了,我不会接受任何有所矛盾的信息。我当时不可能被动摇。”
"Well, what if you knew everything that you know now six months before the day that you left?"
“那么如果你在你离开前六个月就已经知道你现在知道的这些,结果会如何?”
"At that point, I think it probably would have changed my mind."
“若在那个时间点,我想我可能会改变心意。”
Radicalization isn't this yes-or-no choice.
极端化并不是关于是非的选择。
It's a process, during which people have questions -- about ideology, religion, the living conditions.
它是一个过程,在这过程中,人们会有问题--关于意识型态、宗教、生活条件的问题。
And they're coming online for answers, which is an opportunity to reach them.
他们会上网寻求答案,这就是一个接触他们的机会。
And there are videos online from people who have answers
有答案的人会在网络提供影片
defectors, for example, telling the story of their journey into and out of violence;
比如,叛逃者诉说他们投入和脱离暴力的心路历程;
stories like the one from that man I met in the Iraqi prison.
就像我在伊拉克监狱见到的那名男子告诉我的故事。
There are locals who've uploaded cell phone footage of what life is really like in the caliphate under ISIS's rule.
有当地人会上传手机影片,呈现在伊斯兰国统治之下哈里发国的真实生活样貌。
There are clerics who are sharing peaceful interpretations of Islam. But you know what?
有伊斯兰教的神职人员分享对伊斯兰教的和平诠释。但你们知道吗?
These people don't generally have the marketing prowess of ISIS.
这些人通常都没有伊斯兰国的高超营销本领。
They risk their lives to speak up and confront terrorist propaganda,
他们冒着生命危险说出来,和恐怖主义的宣传对质,
and then they tragically don't reach the people who most need to hear from them.
但很不幸的是,他们无法接触到最需要听到他们声音的人。
And we wanted to see if technology could change that.
我们想看看科技是否能改变这一点。
So in 2016, we partnered with Moonshot CVE to pilot a new approach to countering radicalization called the "Redirect Method."
2016年,我们和Moonshot CVE(一个对抗暴力极端主义的组织)合作,试验一种对抗极端化的新方法,叫做“重新定向法”。
It uses the power of online advertising
它用在线广告的力量,
to bridge the gap between those susceptible to ISIS's messaging and those credible voices that are debunking that messaging.
在容易受到伊斯兰国信息影响的人与揭发那些信息真面目的可靠声音之间搭起桥梁。
And it works like this: someone looking for extremist material -- say they search for "How do I join ISIS?"
它的运作方式如下:有人在寻找极端主义的素材--比如他们搜寻“如何加入伊斯兰国?”
will see an ad appear that invites them to watch a YouTube video of a cleric, of a defector -- someone who has an authentic answer.
就会有一则广告出现,邀请他们观看一段YouTube影片,影片的主角是神职人员或变节者--能给出真实答案的人。


And that targeting is based not on a profile of who they are,
这个方法锁定目标对象的方式不是依据个人资料,
but of determining something that's directly relevant to their query or question.
而是由与他们的查询或问题有直接相关的东西来决定。
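The query-based targeting described above can be sketched in a few lines. This is a purely illustrative toy (the keyword list, playlist contents, and the `redirect_ad_for` function are all hypothetical); the real Redirect Method uses keyword targeting on commercial ad platforms, not a lookup like this:

```python
# Toy sketch of the Redirect Method's query-based targeting.
# All data and names here are hypothetical, for illustration only.

# Hypothetical query patterns signalling interest in extremist material.
RISK_KEYWORDS = {"join isis", "how to reach syria"}

# Hypothetical counter-narrative playlist: defectors and clerics.
COUNTER_PLAYLIST = [
    "Defector testimony: life under ISIS rule",
    "Cleric Q&A: peaceful interpretations of Islam",
]

def redirect_ad_for(query: str):
    """Return a counter-narrative video ad if the query matches.

    Note the targeting uses only the query itself, never a profile
    of who the searcher is -- mirroring the talk's description.
    """
    normalized = query.lower()
    if any(kw in normalized for kw in RISK_KEYWORDS):
        return {"type": "video_ad", "playlist": COUNTER_PLAYLIST}
    return None  # benign query: no intervention

print(redirect_ad_for("How do I join ISIS?"))  # matched: returns an ad
print(redirect_ad_for("falafel recipe"))       # no match: returns None
```

The key design point the talk emphasizes survives even in this sketch: the decision depends on the question being asked, not on any stored profile of the person asking it.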
During our eight-week pilot in English and Arabic,
我们用英语和阿拉伯语做了八周的测试,
we reached over 300,000 people who had expressed an interest in or sympathy towards a jihadi group.
接触到了超过三十万人,他们都是对圣战团体表示感兴趣或同情的人。
These people were now watching videos that could prevent them from making devastating choices.
现在这些人在看的影片能预防他们做出毁灭性的选择。
And because violent extremism isn't confined to any one language, religion or ideology,
因为暴力极端主义不局限于任何一种语言、宗教或意识形态,
the Redirect Method is now being deployed globally to protect people being courted online by violent ideologues,
“重新定向法”现已在全球实施,保护大家上网时不会受到暴力意识形态的引诱,
whether they're Islamists, white supremacists or other violent extremists,
不论是伊斯兰教的、白人至上主义的或其他暴力极端主义的,
with the goal of giving them the chance to hear from someone on the other side of that journey;
我们的目标是要给他们机会去听听看在那旅程另一端的人怎么说;
to give them the chance to choose a different path.
给他们机会去选择不同的路。
It turns out that often the bad guys are good at exploiting the internet,
事实证明,坏人通常擅长利用互联网,
not because they're some kind of technological geniuses, but because they understand what makes people tick.
并不是因为他们是什么科技天才,而是因为他们了解人的心理动机。
I want to give you a second example: online harassment.
我再举个例子说明:在线骚扰。
Online harassers also work to figure out what will resonate with another human being.
在线骚扰者也在致力于找出什么能让另一个人产生共鸣。
But not to recruit them like ISIS does, but to cause them pain.
但他们的目的不像伊斯兰国是要招募人,而是造成别人痛苦。
Imagine this: you're a woman, you're married, you have a kid.
想象一下这个状况:你是一名女子,已婚,有一个孩子。
You post something on social media, and in a reply, you're told that you'll be raped, that your son will be watching, details of when and where.
你在社交媒体上发了一篇帖子,有人回复说你会被强暴,你的儿子会在一旁看着,还写明了时间和地点的细节。
In fact, your home address is put online for everyone to see.
事实上,在网络上大家都能看到你家的地址。
That feels like a pretty real threat.
那威胁感觉十分真实。
Do you think you'd go home? Do you think you'd continue doing the thing that you were doing?
你认为你会回家吗?你认为你会继续做你正在做的事吗?
Would you continue doing that thing that's irritating your attacker?
你会继续做那件惹恼了攻击你的人的那件事吗?
Online abuse has been this perverse art of figuring out what makes people angry,
在线虐待一直都是种刻意作对的艺术,找出什么能让人生气,
what makes people afraid, what makes people insecure, and then pushing those pressure points until they're silenced.
什么能让人害怕,什么能让人没有安全感,接着不断按压那些痛点,直到对方噤声为止。
When online harassment goes unchecked, free speech is stifled.
当在线骚扰不受管束时,自由言论就会被扼杀。
And even the people hosting the conversation throw up their arms and call it quits,
就连主持对话的人也双手一摊,宣布放弃,
closing their comment sections and their forums altogether.
把他们的留言区以及论坛都关闭。
That means we're actually losing spaces online to meet and exchange ideas.
那意味着,我们其实正在失去在线可以碰面交换点子的空间。
And where online spaces remain, we descend into echo chambers with people who think just like us.
在还有在线空间的地方,我们会陷入到回音室当中,只和相同想法的人聚在一起。
But that enables the spread of disinformation; that facilitates polarization.
但那会造成错误信息被散布出去;那会促成两极化。
What if technology instead could enable empathy at scale?
但如果反之能用科技来大量产生同理心呢?
This was the question that motivated our partnership with Google's Counter Abuse team, Wikipedia and newspapers like the New York Times.
就是这个问题促使我们和谷歌的反虐待小组、维基百科,以及像《纽约时报》这类报纸合作。
We wanted to see if we could build machine-learning models that could understand the emotional impact of language.
我们想要看看我们是否能建立出能够了解语言会带来什么情绪影响的机器学习模型。
Could we predict which comments were likely to make someone else leave the online conversation?
我们能否预测什么样的意见有可能会让另一个人离开在线对话?
And that's no mean feat. That's no trivial accomplishment for AI to be able to do something like that.
那不是容易的事。人工智能要做到那样的事,并不是理所当然。
I mean, just consider these two examples of messages that could have been sent to me last week.
我是指,想想这两个例子,都是在上周我有可能会收到的信息。
"Break a leg at TED!" ... and "I'll break your legs at TED."
“祝你在TED一切顺利!” 以及“我会在TED打断你的腿。”
You are human, that's why that's an obvious difference to you, even though the words are pretty much the same.
你们是人,那就是为何你们能明显看出,用字几乎相同的两个句子有何差别。
But for AI, it takes some training to teach the models to recognize that difference.
但对人工智能来说,要透过训练来教导模型去辨识那差别。
The beauty of building AI that can tell the difference is that AI can then scale to the size of the online toxicity phenomenon,
建立能够分辨出那差别的人工智能,有个美好之处,就是人工智能能处理在线毒素现象的规模,
and that was our goal in building our technology called Perspective.
为此,我们建立了一项名为“观点”(Perspective)的技术。
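To make the "Break a leg" / "I'll break your legs" distinction concrete, here is a deliberately crude sketch of scoring a comment's likely emotional impact. The pattern list and `toxicity_score` function are invented for illustration; the actual Perspective service uses trained machine-learning models, not keyword rules like these:

```python
# Toy comment scorer, in the spirit of the Perspective example.
# Purely illustrative heuristic -- not the real Perspective model.
import re

# Hypothetical threat patterns: second-person violence reads very
# differently from the idiom "break a leg", despite similar words.
THREAT_PATTERNS = [
    r"\bbreak your\b",
    r"\byou('ll| will) be\b.*\b(raped|hurt|killed)\b",
]

def toxicity_score(comment: str) -> float:
    """Return a crude score in [0, 1]; higher means the comment is
    likelier to drive the other person out of the conversation."""
    text = comment.lower()
    hits = sum(1 for p in THREAT_PATTERNS if re.search(p, text))
    # Any threat hit pushes the score toward 1.0.
    return min(1.0, hits / len(THREAT_PATTERNS) + (0.5 if hits else 0.0))

# The talk's two near-identical messages land very differently:
print(toxicity_score("Break a leg at TED!"))           # 0.0: idiom
print(toxicity_score("I'll break your legs at TED."))  # 1.0: threat
```

A rule list like this obviously cannot generalize, which is exactly the talk's point: telling these messages apart at internet scale is what the machine-learning models had to be trained to do.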
With the help of Perspective, the New York Times, for example, has increased spaces online for conversation.
在“观点”的协助下,以《纽约时报》为例,他们增加了在线交谈的空间。
Before our collaboration, they only had comments enabled on just 10 percent of their articles.
在与我们合作之前,他们的文章只有大约10%有开放留言。
With the help of machine learning, they have that number up to 30 percent.
在机器学习的协助下,这个数字提升到了30%。
So they've tripled it, and we're still just getting started.
这个数字成长为原来的三倍,而且我们才刚起步而已。
But this is about way more than just making moderators more efficient.
这绝对不只是让版主更有效率。
Right now I can see you, and I can gauge how what I'm saying is landing with you.
现在,我可以看见你们,我可以估量我所说的话会如何对你们产生影响。
You don't have that opportunity online.
但是在线上没有这样的机会。
Imagine if machine learning could give commenters, as they're typing, real-time feedback about how their words might land,
想象一下,当留言者在打字的时候,如果机器学习能够即时给他们反馈,说明他们的文字可能会造成什么影响,
just like facial expressions do in a face-to-face conversation.
就像在面对面交谈时面部表情的功能。
Machine learning isn't perfect, and it still makes plenty of mistakes.
机器学习并不完美,它仍然会犯许多错误。
But if we can build technology that understands the emotional impact of language, we can build empathy.
但如果我们能建立出能了解语言有什么情绪影响力的技术,我们就能建立同理心。
That means that we can have dialogue between people with different politics, different worldviews, different values.
那就表示,我们能让两个人对话,即使他们政治立场不同,世界观不同,价值观不同。
And we can reinvigorate the spaces online that most of us have given up on.
我们能让大部分人已经放弃的在线空间再度复兴。
When people use technology to exploit and harm others, they're preying on our human fears and vulnerabilities.
当人们用科技来利用和伤害他人时,他们靠的是我们人类的恐惧和脆弱。
If we ever thought that we could build an internet insulated from the dark side of humanity, we were wrong.
如果我们认为我们能够建立一个完全没有人性黑暗面的互联网,我们就错了。
If we want today to build technology that can overcome the challenges that we face,
如果现今我们想要建立技术来克服我们面临的挑战,
we have to throw our entire selves into understanding the issues
我们就得把自己全心全意投入,并去了解这些议题,
and into building solutions that are as human as the problems they aim to solve.
并建立与其要解决的问题同样具有人性的解决方案。
Let's make that happen. Thank you.
让我们来实现它吧。谢谢。

重点单词
  • awe n. 敬畏,恐惧 vt. 使敬畏或惊惧
  • insecure adj. 不安全的;不稳定的;不牢靠的
  • impact n. 冲击(力),冲突,影响(力) vt. 挤入,压紧
  • escape v. 逃跑,逃脱,避开 n. 逃跑,逃脱,(逃避)方法、
  • authentic adj. 可信(靠)的,真实的,真正的
  • comment n. 注释,评论;闲话 v. 注释,评论
  • irritating adj. 刺激的,使愤怒的,气人的 动词irritate
  • gauge n. 测量标准,轨距,口径,直径,测量仪器 vt. 测量
  • approach n. 接近;途径,方法 v. 靠近,接近,动手处理
  • query n. 质问,疑问,疑问号 vt. 质问,对 ... 表示