“We’re not having enough conversations today about how technology is designed, developed and deployed that also consider its dangers, and how it can deepen existing inequalities.”

– Joy Buolamwini, Founder, Algorithmic Justice League

Coders develop, write, and test the essential programs and applications we use every day. Those who work at the intersection of tech and social justice, like MIT researcher Joy Buolamwini and hacker Matt Mitchell, use their expertise to serve the greater public good. We call them public interest technologists.


Public interest tech is about all of us. To thrive, it needs the talent and dedication of people, organizations, and funders.

Which One Are You?

Transcript

Joy Buolamwini: This is a mask that I wore as a master’s student at the MIT Media Lab when I was working on an art project. This art project would detect a face in a mirror and then display something like an animal, or somebody who inspired me. So, this project I called the Aspire Mirror. And, as I was working on it, I realized that the software I was using had a hard time detecting my face. So, when I put on the white mask, instead of struggling to be detected, it was detected flawlessly. And that was the moment I decided to start looking into why it was that my face couldn’t be detected but this white mask could.

[Joy Buolamwini, Founder of the Algorithmic Justice League. A Black person wearing a bright pink blazer and matching glasses. Joy holds a white faceless mask in one hand and a white shield with the letters AJL printed on the front in the other hand. Joy smiles and raises the shield in the air like a champion.]

My name is Joy Buolamwini and I’m a public interest technologist. I started the Algorithmic Justice League when I realized we were entering the age of automation overconfident and underprepared. I look at the ways in which computers learn to detect, classify, and identify faces. I wanted to build tools to help researchers and practitioners code in a more inclusive way and also think through what they were developing with a full-spectrum mindset, across the design, development, and deployment of their system. So, for example, one of our latest projects is Gender Shades. It provided quantitative data about the failure rates of facial analysis technology from some of the leading tech companies when classifying the gender of somebody’s face. And what we found in that study was that these systems work better on male faces than female faces. They also work better on lighter skin than darker skin. And if you broke it down into different subgroups, lighter males had the best performance.

[A chart of gender classifiers shows that light-skinned males are most accurately analyzed by Microsoft, Face Plus Plus, and IBM facial recognition technology, compared to light-skinned females, dark-skinned males, and dark-skinned females, who had the lowest successful facial recognition results.]

For lighter-skinned males, no company we tested had an error rate of over one percent. Whereas, if you looked at darker-skinned females, you had error rates as high as 35 percent. So, to have a tech company failing on one in three women of color, or one in three women with faces like mine, that’s very concerning.

[Graphics showing the failure rates of facial analysis technology when tested on images of people of color. Oprah Winfrey is labeled male with 76.5 percent confidence. Serena Williams is labeled male. Michelle Obama is described as a young man wearing a black shirt, a hairpiece, or a toupee. Ida B. Wells is described as wearing a coonskin hat.]

If you’re looking at most of the world, you’re looking at women and people of color. So, if you’re narrowly focusing technology to serve a very homogeneous group, and the technology is being developed by a homogeneous group, you’re actually missing the majority of the world.

One thing I’ve been really excited about with the Gender Shades research is that it’s now being referenced by others to justify the need for more inclusive technology, and also the need for more rigorous standards. One of the areas in which the Gender Shades research has been used is an open letter to Axon’s AI ethics board. Axon is a company that provides police departments with body-worn cameras, and it’s considering putting facial recognition technology on those cameras. The letter stipulates specific steps that would need to be taken to make sure this technology isn’t abused, and it identifies areas where the technology shouldn’t be used at all.

So, there aren’t easy solutions, but there are definitely practices that can be built into the design, development, and deployment of these technologies. It can’t just be me with a white mask and a shield on my own. We need everybody to add your perspectives and to add your voice, so we can create a world where technology works well for all of us, not just some of us, and centers social change.

[This is tech at work for the public! Hashtag Public Interest Tech. Ford Foundation dot org forward slash tech. Ford Foundation logo: a globe made up of a series of small, varied circles.]

Accessibility Statement

  • All videos produced by the Ford Foundation since 2020 include captions and downloadable transcripts. For videos where visuals require additional understanding, we offer audio-described versions.
  • We are continuing to make videos produced prior to 2020 accessible.
  • Videos from third-party sources (those not produced by the Ford Foundation) may not have captions, accessible transcripts, or audio descriptions.
  • To improve accessibility beyond our site, we’ve created a free video accessibility WordPress plug-in.

“We need everybody to add your perspectives, to add your voice so that we can develop technologies that can work for all of us and not just some of us.”

– Joy Buolamwini, Founder, Algorithmic Justice League

As artificial intelligence transforms our daily lives and powers our world, it’s important to stop and ask ourselves: Do these technologies benefit all of us? Or just some of us?