
Joy Buolamwini: How Does Facial Recognition Software See Skin Color?

Jan 26, 2018
Originally published on January 29, 2018 1:26 pm

Part 2 of the TED Radio Hour episode Can We Trust The Numbers?

About Joy Buolamwini's TED Talk

Facial analysis technology is often unable to recognize dark skin tones. Joy Buolamwini says this bias can lead to detrimental results — and she urges her colleagues to create more inclusive code.

About Joy Buolamwini

As a "poet of code", computer scientist Joy Buolamwini founded the Algorithmic Justice League to fight inequality in computation.

Her graduate research at the MIT Media Lab focuses on algorithmic and coded bias in machine learning.

Buolamwini is a Fulbright Fellow, an Astronaut Scholar, a Rhodes Scholar, and a Google Anita Borg Scholar.

Copyright 2018 NPR. To see more, visit http://www.npr.org/.

GUY RAZ, HOST:

It's the TED Radio Hour from NPR. I'm Guy Raz. And on the show today - Can We Trust The Numbers? - ideas about our growing faith in data, algorithms and statistics to predict outcomes. Joy, normally, we have criteria for people who are on the show. So we're just making a special exception for you. Normally, you have to be the following to be on the show - a Rhodes Scholar, a Fulbright Fellow, an Anita Borg Scholar, an Astronaut Scholar. Plus, you also have to win a Nobel Prize.

JOY BUOLAMWINI: I fell short? I appreciate the exception, though (laughter).

RAZ: This is Joy Buolamwini. And, OK, she might not have a Nobel Prize, but she does have all those other awards. She's also a graduate researcher at the MIT Media Lab.

BUOLAMWINI: And I am the founder of the Algorithmic Justice League. So my personal mission is to fight algorithmic bias.

RAZ: Yes, the Algorithmic Justice League, which is a group of computer scientists and coders who try to raise awareness about the social problems that exist in algorithms. It's something Joy recently demonstrated by using a basic webcam and facial analysis technology. And it's a kind of technology you might find when you upload a picture on social media.

BUOLAMWINI: And - what I do is I sit in front of the camera hoping for my face to be detected. And I have pretty dark skin. So I'm sitting there with my face, dark skin. There's no detection. Then I pull up my friend's face - she has much lighter skin than I do. She's Chinese. And you see that her face is immediately detected.

So then I switch back to my face, dark skinned and gorgeous, not detected. I put on a white mask. And after I put on the white mask, that's when I'm detected. And I wanted to show this as an example that in the same conditions - right? - a typically lit office, we were having a different experience.

RAZ: So facial recognition software - this is, like, the stuff that Facebook and Google use to know who to tag in photos and stuff. You're saying that a lot of the software doesn't detect black faces?

BUOLAMWINI: Absolutely. This is the kind of technology that you're starting to see in things like the iPhone X with Face ID...

RAZ: Oh, yeah.

BUOLAMWINI: ...Of course with Facebook's auto-tagging and so forth. So this kind of technology is being built on machine-learning techniques. And machine-learning techniques are based on data. So if you have biased data in the input and it's not addressed, you're going to have biased outcomes.

RAZ: Joy explained more about this from the TED stage.

(SOUNDBITE OF TED TALK)

BUOLAMWINI: Unfortunately, I've run into this issue before. When I was an undergraduate at Georgia Tech studying computer science, I used to work on social robots. And one of my tasks was to get a robot to play peek-a-boo. The problem is, peek-a-boo doesn't really work if I can't see you. And my robot couldn't see me.

Not too long after, I was in Hong Kong on a tour of local startups. One of the startups had a social robot. And they decided to do a demo. The demo worked on everybody until it got to me. And you can probably guess it. It couldn't detect my face.

So what's going on? Why isn't my face being detected? Well, we have to look at how we give machines sight. Computer vision uses machine-learning techniques to do facial recognition. So how this works is you create a training set with examples of faces. This is a face. This is a face. This is not a face. And over time, you can teach a computer how to recognize other faces.

However, if the training sets aren't really that diverse, any face that deviates too much from the established norm will be harder to detect, which is what was happening to me.
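A concrete way to picture the pipeline she describes: a detector is trained once on example faces, then applied as a fixed model, so any skew in its training set is baked into which new faces it finds. Here is a minimal sketch using OpenCV's stock Haar-cascade detector (an illustration of the general technique, not the software from Buolamwini's demo; the input filename is a placeholder):

    # Minimal sketch of the detection pipeline: load a model that was
    # trained offline on labeled face / not-face examples, then ask it
    # whether a new image contains a face.
    import cv2

    # Pretrained detector shipped with OpenCV.
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    )

    image = cv2.imread("webcam_frame.jpg")          # placeholder image path
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)  # detector expects grayscale

    # Returns one (x, y, w, h) box per detected face; an empty result
    # means the model simply did not "see" a face in the frame.
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    print(f"{len(faces)} face(s) detected")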

RAZ: Joy, you gave this talk a couple of years ago, but this problem still exists.

BUOLAMWINI: Yes. And it's even more urgent now because there is this assumption that we've arrived.

RAZ: Yeah.

BUOLAMWINI: So, for example, in 2014, Facebook released a paper called "DeepFace" that showed a major breakthrough for facial recognition technology. They achieved 97.35 percent accuracy on the gold standard benchmark for facial recognition at the time. But we always have to ask with these types of technologies, with AI - who's included? Who's excluded? So now I just told you 97.35 percent accuracy.

RAZ: Sounds great.

BUOLAMWINI: Guess what the gender ratio was?

RAZ: I don't know, 50/50?

BUOLAMWINI: It's a gold standard. You would think 50/50.

RAZ: Yeah.

BUOLAMWINI: It was 77.5 percent male.

RAZ: Wow.

BUOLAMWINI: And then the demographic breakdown was - I want to say 80.5 percent white for this gold standard benchmark.

RAZ: Wow.

BUOLAMWINI: So now when you know that the gold standard has these skews, when you see something like 97.35 percent accuracy, we've made a major breakthrough, you start to get a better understanding of exactly which faces - right? - this breakthrough applies to and which ones might not be included. And it's a reflection of people who are in positions of power to mold artificial intelligence. And that's a very limited group right now.
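A back-of-the-envelope illustration of her point, with invented numbers (not DeepFace's actual subgroup results): when one group makes up roughly 80 percent of a benchmark, strong performance on that group dominates the headline accuracy and can mask weaker performance on everyone else.

    # Hypothetical numbers, for illustration only: aggregate accuracy on a
    # skewed benchmark is a weighted average dominated by the majority group.
    groups = {
        # group: (share of benchmark, accuracy on that group)
        "majority group": (0.805, 0.99),
        "minority group": (0.195, 0.90),
    }

    aggregate = sum(share * acc for share, acc in groups.values())
    print(f"headline accuracy: {aggregate:.4f}")  # ~0.97, despite the gap below
    for name, (share, acc) in groups.items():
        print(f"  {name}: {acc:.0%} accurate on {share:.1%} of the benchmark")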

(SOUNDBITE OF TED TALK)

BUOLAMWINI: Across the U.S., police departments are starting to use facial recognition software in their crime-fighting arsenal. Georgetown Law published a report showing that 1 in 2 adults in the U.S. - that's 117 million people - have their faces in facial recognition networks. Police departments can currently look at these networks unregulated using algorithms that have not been audited for accuracy. Yet, we know facial recognition is not fail-proof. And labeling faces consistently remains a challenge. You might have seen this on Facebook. My friends and I laugh all the time when we see other people mislabeled in our photos.

But misidentifying a suspected criminal is no laughing matter, nor is breaching civil liberties. Law enforcement is also starting to use machine learning for predictive policing. Some judges use machine-generated risk scores to determine how long an individual is going to spend in prison. So we really have to think about these decisions. Are they fair? And we've seen that algorithmic bias doesn't necessarily always lead to fair outcomes.

RAZ: So how do you stop this? I mean, how do you fight algorithmic bias?

BUOLAMWINI: So I feel like the minimum thing we can do is actually check the performance of these systems across groups that we already know have historically been disenfranchised - right? - in the first place. I feel like that's a minimal thing. Then we also need to think about what steps to take to address the bias as well, right? You know, this is why I think it's critically important we have diverse people participating in the creation of the future. And that means having diverse people shaping the priorities as well as developing the technology.
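That "minimum thing" - reporting performance per group instead of one aggregate number - can be as simple as the sketch below (the group labels and results are toy data, just to show the shape of the check):

    # Disaggregated evaluation: compute accuracy separately per group.
    from collections import defaultdict

    def accuracy_by_group(records):
        """records: iterable of (group_label, prediction_was_correct)."""
        totals, hits = defaultdict(int), defaultdict(int)
        for group, correct in records:
            totals[group] += 1
            hits[group] += int(correct)
        return {g: hits[g] / totals[g] for g in totals}

    # Toy results standing in for a real audit of a vision system.
    results = [
        ("darker-skinned women", False), ("darker-skinned women", True),
        ("lighter-skinned men", True), ("lighter-skinned men", True),
    ]
    for group, acc in accuracy_by_group(results).items():
        print(f"{group}: {acc:.0%}")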

RAZ: So here's what I wonder. Can - I mean, is it possible to create a completely unbiased algorithm?

BUOLAMWINI: It depends on what the task is. But the question I think about more is - knowing that we are deeply biased, even in our language, our classification systems, et cetera - how do we create systems that work well for humanity? But, also, how do we keep ourselves honest as we're making progress, right?

So it's like if you're talking about perfecting a democracy. Will there be a perfect democracy? From what I look and see, probably not, you know. But you're trying to create a more perfect union in some way. So in trying to create more perfect AI, you strive for these ideals of inclusion. You want to mitigate bias, et cetera.

But we also have the humility to know that being fallible, being human and being humans who embed our fallibility into the machines we create, we're not necessarily going to be perfect all the time. But we have to try to do our best and continue to improve.

And if we exclude people and we limit people's humanity - right? - which is what happens when we have the algorithmic bias that's not addressed, we really limit the potential for all of us in the long run.

RAZ: Joy Buolamwini. She's a computer scientist and the founder of the Algorithmic Justice League. You can see her full talk at ted.com. Transcript provided by NPR, Copyright NPR.