When I was a master’s student at MIT, I worked on a number of different art projects that used facial analysis technology. One in particular, called the Aspire Mirror, would detect my face in a mirror and then display a reflection of something different, based on what inspired me or what I wanted to empathize with. As I was working on it, I realized that the software I was using had a hard time detecting my face. But after I made one adjustment, the software no longer struggled: I put on a white mask.
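
For readers who haven’t worked with this kind of software, a minimal sketch of what “detecting a face” means in code may help. The example below uses OpenCV’s pretrained Haar cascade detector; this is an assumption for illustration, since the essay does not name the specific library the Aspire Mirror relied on.

```python
# A minimal sketch of automated face detection, assuming OpenCV's
# bundled Haar cascade model (illustrative only; not necessarily the
# software the Aspire Mirror used).
import cv2

# Load OpenCV's pretrained frontal-face detector.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def detect_faces(image_path: str):
    """Return bounding boxes (x, y, w, h) for any faces found."""
    image = cv2.imread(image_path)
    if image is None:
        raise FileNotFoundError(f"Could not read image: {image_path}")
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    return detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

# "selfie.jpg" is a placeholder path. An empty result is the failure
# mode described above: the software simply reports that no face is present.
faces = detect_faces("selfie.jpg")
print(f"Faces detected: {len(faces)}")
```

The quiet danger here is that a detector trained or tuned on unrepresentative data fails silently: it returns no error, just an empty list, as if the person in front of the camera were not there.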

This disheartening moment brought to mind Frantz Fanon’s book Black Skin, White Masks, which interrogates the complexities of changing oneself: putting on a mask to fit the norms or expectations of a dominant culture. But what kind of culture of innovation do we have, for it to lead to a situation where my face couldn’t be detected until I masked it in white? As artificial intelligence, which powers facial analysis technology, increasingly governs access to opportunities, economic participation, and even personal freedoms, what does this mean for me, and for the millions of people like me, who might be adversely affected by technology without even being aware of it? I had broader concerns, too. What systems enabled the technology that found my face undetectable? What larger biases undergird these technological advances?

My work on the social implications of artificial intelligence, and on bias in facial analysis technology, was sparked by that moment: not because it was the first time I had encountered the problem, but because I was seeing the technology adopted more and more widely without consideration of its dangers.

I started the Algorithmic Justice League because of moments like these. I realized that as we enter the age of automation, we are overconfident and underprepared, living with celebrated technologies that were not always developed inclusively or applied ethically. From the beginning, our mission has been to fight the “coded gaze,” my term for algorithmic bias that can lead to social exclusion and discriminatory practices.

I think of the coded gaze as a reflection of the priorities, preferences, and prejudices of those who have the power to shape technology, a narrow group of people. The Algorithmic Justice League aims to take different perspectives into account, adding to the conversation those who are often excluded from it and highlighting how we can make some of the most commonly used technologies more inclusive.

This is vitally important. Technology pervades so many aspects of our lives, and it would be a serious error, and a missed opportunity, if we didn’t apply or adapt it to the public interest and social justice. This is an exciting time, when brilliant minds and products are spearheading incredible advances. But what if those technologies actually propagate bias, or reflect dated beliefs that so much civil rights progress has worked to overturn? These problems can’t be solved by technological solutions alone.

I identify as a POC, a “poet of code,” working to meet these challenges in a multidisciplinary way. I wish there were more POCs in the tech world. Poets connect with the world on an emotional level; they are often the ones illuminating societal truths that might be hard to see. I try to bring a similar approach to the algorithmic research I do. There’s a beauty and a power to it that feels very poetic.

Much like finding work as a poet, working in public interest technology, as exciting and transformative as it can be, can be quite a challenge. Clear career trajectories in this field are rare; building one very often demands scrappiness, passion, and the ability to weave together the right institutional and financial support.

What is clear, though, is that this is a field that needs full-spectrum inclusion. It needs to embrace a diversity of backgrounds and perspectives. Together, these voices will lead to a world where technology works well for everyone, not just some of us.

Joy Buolamwini founded the Algorithmic Justice League to fight bias in machine learning. She researches social impact technology at the MIT Media Lab.