Hello, I'm Joy, a poet of code, on a mission to stop an unseen force that's rising,
大家好,我是乔伊,一个代码诗人,身负着阻止一股正在崛起的、不可见的力量的重任,
a force that I called "the coded gaze," my term for algorithmic bias.
我把这股力量称为“代码歧视”,这是我给算法偏见起的名字。
Algorithmic bias, like human bias, results in unfairness.
算法偏见,就像人类的偏见一样,会造成不公平的问题。
However, algorithms, like viruses, can spread bias on a massive scale at a rapid pace.
但是算法就像病毒一样,能够大规模、迅速地传播偏见。
Algorithmic bias can also lead to exclusionary experiences and discriminatory practices. Let me show you what I mean.
算法偏见也会导致排斥性的体验以及歧视性行为。让我向你们解释一下我的意思。
Hi, camera. I've got a face. Can you see my face? No-glasses face? You can see her face.
嗨,摄像机。这是我的脸。你能看到我的脸吗?不戴眼镜的脸?你能看到她的脸。
What about my face? I've got a mask. Can you see my mask?
那我的脸呢?我有一个面具。你能看到我的面具吗?
So how did this happen? Why am I sitting in front of a computer in a white mask, trying to be detected by a cheap webcam?
那么这是如何发生的呢?为什么我坐在一台电脑前,戴着一个白色面具,试图让一台廉价的网络摄像头识别出我的脸?
Well, when I'm not fighting the coded gaze as a poet of code, I'm a graduate student at the MIT Media Lab,
当我不以代码诗人的身份对抗代码歧视时,我是麻省理工媒体实验室的一名研究生,
and there I have the opportunity to work on all sorts of whimsical projects, including the Aspire Mirror,
在那里,我有机会参与各种新奇的项目,包括Aspire Mirror,
a project I did so I could project digital masks onto my reflection.
这个项目可以把数字面具投射到我镜中的影像上。
So in the morning, if I wanted to feel powerful, I could put on a lion.
因此在早上,如果我想感到充满力量,我可以把狮子投射于镜子中。
If I wanted to be uplifted, I might have a quote.
如果我想变得振奋,我就会投射一句名句。
So I used generic facial recognition software to build the system, but found it was really hard to test it unless I wore a white mask.
所以我使用通用的面部识别软件来打造系统,但是发现测试起来非常困难,除非我戴着一个白色面具。
Unfortunately, I've run into this issue before.
不幸的是,我之前也遇到过这样的问题。
When I was an undergraduate at Georgia Tech studying computer science,
当我在佐治亚理工学院读本科、学习计算机科学的时候,
I used to work on social robots, and one of my tasks was to get a robot to play peek-a-boo,
我曾经研究社交机器人,我的一个任务就是让一个机器人玩儿躲猫猫,
a simple turn-taking game where partners cover their face and then uncover it saying, "Peek-a-boo!"
这是一个简单的轮流互动游戏,参与者先遮住自己的脸,然后揭开并说道:“躲猫猫!”
The problem is, peek-a-boo doesn't really work if I can't see you, and my robot couldn't see me.
问题是,如果我看不到你,那么躲猫猫就玩儿不了,而我的机器人就是不能看到我。
But I borrowed my roommate's face to get the project done, submitted the assignment,
但是我借用了室友的脸完成了这个项目,提交了作业,
and figured, you know what, somebody else will solve this problem.
然后心想,总会有其他人来解决这个问题的。
Not too long after, I was in Hong Kong for an entrepreneurship competition.
过了没多久,我在香港参加一场创业竞赛。
The organizers decided to take participants on a tour of local start-ups.
组织方决定带参赛者参观当地的初创公司。
One of the start-ups had a social robot, and they decided to do a demo.
其中一个初创企业有一个社交机器人,他们决定做一个演示。
The demo worked on everybody until it got to me, and you can probably guess it.
演示在每个人身上都成功了,直到轮到我,结果可想而知。
It couldn't detect my face. I asked the developers what was going on,
它不能识别我的脸。我问研发者出了什么事情,
and it turned out we had used the same generic facial recognition software.
结果发现我们使用的是同一款通用面部识别软件。
Halfway around the world, I learned that algorithmic bias can travel as quickly as it takes to download some files off of the internet.
在地球的另一端,我发现算法偏见传播的速度,就像从网上下载一些文件一样快。
So what's going on? Why isn't my face being detected?
发生了什么事情?为什么我的脸不能被识别?
Well, we have to look at how we give machines sight.
我们必须要审视一下我们是如何赋予机器视觉能力的。
Computer vision uses machine learning techniques to do facial recognition.
计算机视觉应用了机器学习技术,来进行面部识别。
So how this works is, you create a training set with examples of faces.
因此它的工作原理是,你要创建一个包含人脸样本的训练集。
This is a face. This is a face. This is not a face. And over time, you can teach a computer how to recognize other faces.
这是一张脸。这是一张脸。这不是一张脸。随着时间的推移,你就能教会计算机识别其他的脸。
However, if the training sets aren't really that diverse,
但是,如果这个训练集不是那么的多样化,
any face that deviates too much from the established norm will be harder to detect, which is what was happening to me.
那么任何与既定标准相去甚远的脸,都会更难被识别,这就是发生在我身上的事情。
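To make this train-then-detect idea concrete, here is a minimal sketch in Python. It is not the speaker's actual system: it simply runs OpenCV's bundled, pre-trained Haar-cascade face detector (a stand-in for the kind of generic facial recognition software described above) over a few photos, whose filenames are hypothetical placeholders. Whether a given face is found depends entirely on the data the underlying model was trained on.

```python
# Minimal sketch (assumed filenames): run a generic, pre-trained face
# detector over a few test photos and report whether a face was found.
import cv2  # OpenCV, which ships with a pre-trained Haar-cascade face model

# Load the stock frontal-face detector bundled with OpenCV.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def count_faces(image_path):
    """Return how many faces the generic detector finds in an image."""
    image = cv2.imread(image_path)
    if image is None:
        raise FileNotFoundError(image_path)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return len(faces)

# Faces far from the detector's training distribution can simply go
# undetected, which is the failure mode described in the talk.
for path in ["my_photo.jpg", "white_mask.jpg"]:  # hypothetical test images
    print(path, "->", count_faces(path), "face(s) detected")
```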
But don't worry -- there's some good news.
但是不要担心,有一些好消息。
Training sets don't just materialize out of nowhere. We actually can create them.
训练集并非凭空产生。我们实际上能创造它们。
So there's an opportunity to create full-spectrum training sets that reflect a richer portrait of humanity.
因此,我们有机会创建覆盖全面的训练集,反映出更丰富多样的人类面貌。
Now you've seen in my examples how social robots was how I found out about exclusion with algorithmic bias.
现在你们已经从我的例子中看到了,我是如何通过社交机器人发现算法偏见带来的排斥的。
But algorithmic bias can also lead to discriminatory practices.
但是算法偏见也可能导致歧视性行为。
Across the US, police departments are starting to use facial recognition software in their crime-fighting arsenal.
在全美国,警察部门正开始把面部识别软件纳入他们打击犯罪的武器库。
Georgetown Law published a report showing that one in two adults in the US
Georgetown Law公布了一份报告,在美国,每两个成年人就有一个,
that's 117 million people -- have their faces in facial recognition networks.
也就是1.17亿人,在面部识别网络中有他们的脸部信息。
Police departments can currently look at these networks unregulated, using algorithms that have not been audited for accuracy.
警察部门目前可以在不受监管的情况下查看这些网络,使用的算法也未经过准确性审核。
Yet we know facial recognition is not fail proof, and labeling faces consistently remains a challenge.
但是我们知道,面部识别并不是万无一失的,而且始终如一地标记人脸仍然是一项挑战。
You might have seen this on Facebook. My friends and I laugh all the time when we see other people mislabeled in our photos.
你可能在脸书上看到过这个。当我的朋友和我看见其他人在我们的照片中被错误标识时,我们就会笑个不停。
But misidentifying a suspected criminal is no laughing matter, nor is breaching civil liberties.
但是错误标识一个嫌疑犯就不是一件好笑的事情了,而且也侵犯了公民自由。
Machine learning is being used for facial recognition, but it's also extending beyond the realm of computer vision.
机器学习正被应用于面部识别,但它的应用也正在扩展到计算机视觉以外的领域。
In her book, "Weapons of Math Destruction," data scientist Cathy O'Neil talks about the rising new WMDs
在数据科学家凯西·奥尼尔的《数学杀伤性武器》一书中,她谈到了正在兴起的新型WMD,
widespread, mysterious and destructive algorithms that are increasingly being used to make decisions that impact more aspects of our lives.
即广泛传播的、不可知的、摧毁性的算法,这种算法正越来越多地被用于做出影响我们生活的各个方面的决定。
So who gets hired or fired? Do you get that loan? Do you get insurance?
那么谁被录用了,谁又被解雇了呢?你能否得到贷款或保险?
Are you admitted into the college you wanted to get into?
你能否进入你想去的大学?
Do you and I pay the same price for the same product purchased on the same platform?
在同一平台上购买相同的产品,你和我支付的价格一样吗?
Law enforcement is also starting to use machine learning for predictive policing.
执法机构也开始使用机器学习进行预测警务。
Some judges use machine-generated risk scores to determine how long an individual is going to spend in prison.
一些法官会根据机器生成的风险评分,来决定一个人要在监狱中服刑多长时间。
So we really have to think about these decisions. Are they fair?
因此我们真的要考虑一下这些决定。它们公平吗?
And we've seen that algorithmic bias doesn't necessarily always lead to fair outcomes.
我们已经看到了算法偏见不会总是带来公平的结果。
So what can we do about it? Well, we can start thinking about how we create more inclusive code and employ inclusive coding practices.
因此我们能做些什么呢?我们可以开始思考如何编写更具包容性的代码,并采用更具包容性的编码实践。
It really starts with people. So who codes matters.
这真正要从人开始。由谁来编写代码很重要。
Are we creating full-spectrum teams with diverse individuals who can check each other's blind spots?
我们是否组建了由多元化个体构成的全面团队,能够互相检查彼此的盲区?
On the technical side, how we code matters. Are we factoring in fairness as we're developing systems?
在技术层面,我们如何编码很重要。我们在开发系统时是否把公平考虑了进去?
And finally, why we code matters. We've used tools of computational creation to unlock immense wealth.
最后,我们为什么编码也很重要。我们利用计算创作工具,打开了巨大的财富之门。
We now have the opportunity to unlock even greater equality if we make social change a priority and not an afterthought.
如果我们把社会变革作为优先事项而不是事后考虑,我们现在就有机会实现更大程度的平等。
And so these are the three tenets that will make up the "incoding" movement.
这些就是构成“包容性编码”(incoding)运动的三条原则。
Who codes matters, how we code matters and why we code matters.
由谁编码很重要,如何编码很重要,以及为什么编码也很重要。
So to go towards incoding, we can start thinking about building platforms
因此,为了迈向包容性编码,我们可以开始思考搭建这样的平台,
that can identify bias by collecting people's experiences like the ones I shared, but also auditing existing software.
该平台能够通过收集像我分享的这些经历来识别偏见,同时也审核现有的软件。
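As a rough illustration of what auditing existing software for bias could look like, the sketch below reuses the same generic OpenCV detector and measures its detection rate separately for each group in a small labeled benchmark. The CSV file, its column names, and the group labels are all assumptions for this example; a real audit would rely on a much larger, carefully curated benchmark.

```python
# Hypothetical bias-audit sketch: measure how often a face detector
# succeeds for each group in a labeled benchmark.
# "benchmark.csv" with columns image_path,group is an assumed file layout.
import csv
from collections import defaultdict

import cv2

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def face_detected(image_path):
    """True if the generic detector finds at least one face in the image."""
    image = cv2.imread(image_path)
    if image is None:
        return False
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    return len(detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)) > 0

hits = defaultdict(int)    # successful detections per group
totals = defaultdict(int)  # benchmark images per group

with open("benchmark.csv", newline="") as f:
    for row in csv.DictReader(f):  # each row: image_path,group
        totals[row["group"]] += 1
        if face_detected(row["image_path"]):
            hits[row["group"]] += 1

# Large gaps between groups' detection rates are the measurable footprint
# of the bias described in the talk.
for group in sorted(totals):
    rate = hits[group] / totals[group]
    print(f"{group}: {rate:.1%} detection rate ({hits[group]}/{totals[group]})")
```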
We can also start to create more inclusive training sets.
我们也可以开始创建更具包容性的训练集。
Imagine a "Selfies for Inclusion" campaign where you and I can help developers test and create more inclusive training sets.
想象一场“为包容而自拍”的运动,在这场运动中,你和我可以帮助开发者测试并创建更具包容性的训练集。
And we can also start thinking more conscientiously about the social impact of the technology that we're developing.
此外,我们还可以开始更认真地思考我们正在研发的技术所带来的社会影响。
To get the incoding movement started, I've launched the Algorithmic Justice League,
为了启动这场包容性编码运动,我发起了算法正义联盟(Algorithmic Justice League),
where anyone who cares about fairness can help fight the coded gaze.
在联盟里,任何关心公平问题的人都可以帮助对抗代码歧视。
On codedgaze.com, you can report bias, request audits, become a tester and join the ongoing conversation, #codedgaze.
在codedgaze.com上,你可以报告偏见、申请审核、成为一名测试者,并加入正在进行的讨论,话题标签是 #codedgaze。
So I invite you to join me in creating a world where technology works for all of us, not just some of us,
因此,我邀请你们加入我,共同创造一个科技服务于我们每一个人、而不是我们中的一些人的世界,
a world where we value inclusion and center social change. Thank you.
共同创造一个重视包容性、以社会变革为中心的世界。谢谢。
But I have one question: Will you join me in the fight?
但是我有一个问题:在这场对抗中,你们会加入我吗?