Transcript
This is Scientific American's 60-second Science, I'm Christopher Intagliata.
Social media platforms like Facebook use a combination of artificial intelligence and human moderators to scout out and eliminate hate speech. But now researchers have developed a new AI tool that wouldn't just scrub hate speech but would actually craft responses to it, like this: "The language used is highly offensive. All ethnicities and social groups deserve tolerance."
"And this type of intervention response can hopefully short-circuit the hate cycles that we often get in these types of forums."
Anna Bethke, a data scientist at Intel. The idea, she says, is to fight hate speech with more speech—an approach advocated by the ACLU and the U.N. High Commissioner for Human Rights.
So with her colleagues at U.C. Santa Barbara, Bethke got access to more than 5,000 conversations from the site Reddit and nearly 12,000 more from Gab—a social media site where many users banned by Twitter tend to resurface.
The researchers had real people craft sample responses to the hate speech in those Reddit and Gab conversations. Then they let natural-language-processing algorithms learn from the real human responses and craft their own, such as: "I don't think using words that are sexist in nature contribute to a productive conversation."
Which sounds pretty good. But the machines also spit out slightly head-scratching responses like this one: "This is not allowed and un time to treat people by their skin color."
And when the scientists asked human reviewers to blindly choose between human responses and machine responses—well, most of the time, the humans won. The team published the results on the preprint site arXiv and will present them next month in Hong Kong at the Conference on Empirical Methods in Natural Language Processing.
Ultimately, Bethke says, the idea is to spark more conversation.
"And not just to have this discussion between a person and a bot but to start to elicit the conversations within the communities themselves—between the people that might be being harmful and those they're potentially harming."
In other words, to bring back good ol' civil discourse?
"Oh! I don't know if I'd go that far. But it sort of sounds like that's what I just proposed, huh?"
Thanks for listening for Scientific American's 60-second Science. I'm Christopher Intagliata.
Key Points
1. scout out: to search for; to reconnoiter (terrain); to survey
We went ahead to scout out the lie of the land.
2. in nature: in essence; by nature
The rise of a major power is both economic and military in nature.
3. contribute to: to contribute (to); to help bring about
I am sure that this meeting will contribute to the reinforcement of peace and security all over the world.
4. spit out: to say (something) angrily, as if through gritted teeth
He spat out, "I don't like the way he looks at me."