Activists rely on the Internet as a tool and space to build movements. But increasingly, forces that we can’t see are shaping these spaces—like algorithms that govern what rises to the top of social media feeds, companies that constantly track us in order to tailor advertising, or political operatives looking to manipulate public opinion. The Internet is a crowded place, and often gamed in ways that put those advocating for greater openness and justice at risk.

As everyone from advertisers to political adversaries jockeys for attention, they increasingly use automated technologies and processes to raise their own voices or drown out others. In fact, 62 percent of all Internet traffic is made up of programs acting on their own to analyze information, find vulnerabilities, or spread messages. Up to 48 million of Twitter’s 320 million users are bots, or applications that perform automated tasks. Some bots post beautiful art from museum collections; others spread abuse and misinformation. Automation itself isn’t cutting-edge, but the prevalence and sophistication of how automated tools interact with users is.
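At its simplest, a bot of the kind described above is just a program that works through a queue of prepared content and publishes it on a schedule. The sketch below illustrates that pattern in plain Python; the `post_fn` callback is a hypothetical stand-in for whatever platform API client a real bot would use, and the names are invented for illustration.

```python
import time
from collections import deque

def run_bot(posts, post_fn, interval_s=0, max_posts=None):
    """Drain a queue of prepared posts, publishing each one via post_fn.

    post_fn is a placeholder for a real platform API call; interval_s
    paces the posts so a real bot would stay within rate limits.
    """
    queue = deque(posts)
    sent = []
    while queue and (max_posts is None or len(sent) < max_posts):
        item = queue.popleft()
        post_fn(item)      # in practice: call the platform's posting API
        sent.append(item)
        if interval_s:
            time.sleep(interval_s)  # wait between posts
    return sent
```

The same loop serves an art bot or a misinformation bot; only the queue's content differs, which is why the article treats automation itself as neutral.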

Activists and NGOs, politicians, government agencies, and corporations rely on automated tools to carry out all kinds of tasks and operations. NGOs and activists use bots to automate civic engagement—helping citizens register to vote, contact their elected officials, and elevate marginalized voices and issues—to perform operational tasks like fundraising and developing messaging, and to promote transparency and accountability. But they are far outshone by the private sector, whose conversational chatbot interfaces—like Amazon’s Alexa and Apple’s Siri—make platforms easier to use, gather data on customers, and increase profits.
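The civic-engagement bots mentioned above are often simple question-answering interfaces. A minimal sketch of that interaction pattern, assuming keyword matching against a small FAQ (real deployments use full natural-language platforms, and the questions and answers here are invented examples):

```python
def civic_bot_reply(message, faq):
    """Return the answer for the first FAQ entry whose keywords
    appear in the message, or hand off to a human organizer."""
    text = message.lower()
    for keywords, answer in faq:
        if any(kw in text for kw in keywords):
            return answer
    return "I'm not sure -- connecting you with an organizer."

# Illustrative FAQ for a hypothetical voter-help bot.
FAQ = [
    (("register", "registration"),
     "You can check your voter registration on your state election site."),
    (("representative", "contact"),
     "Send me your ZIP code and I'll look up your elected officials."),
]
```

Even this toy version shows the trade-off the article raises: the bot scales outreach cheaply, but every unmatched message still needs a human behind it.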

Politicians, governments, and organizations sometimes use bots to provide public services, such as educational tools on pregnancy and newborn milestones. But they also use them to manipulate public opinion and undermine activists. For example, in Mexico, “Peñabots” were used to support President Enrique Peña Nieto and silence protests against corruption and violence. Activists and journalists in Turkey, Russia, and Venezuela have faced similar efforts to marginalize dissenting opinions on social media. In the US, bot-assisted traffic made stories and misinformation go viral by spreading millions of links to articles on conservative news sites like Breitbart News and InfoWars.

In other cases, bots can be grouped into botnets and used to launch distributed denial of service (DDoS) attacks that bring down activist websites and other online communications systems. One nonprofit tech organization that protects independent media and human rights organizations from these attacks documented more than 400 DDoS attacks aimed at social justice groups in 2016.

There is a huge opportunity for organizations and activists to use automation in constructive ways that further social justice causes, but doing so is not without risk. What follows is a set of questions aimed at helping advocates better understand the challenges and risks that bots and automated activism present.

Do you rely heavily on social media for your communications, outreach, and engagement work?

If social media is a big part of your organization’s strategy, be vigilant about how automated accounts might disrupt your outreach. As bot-generated traffic grows, it will become harder to tell the difference between engagement from actual constituents and bot-generated engagement aimed at confusing your message or deterring your activism.
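One common heuristic for spotting likely-automated engagement is posting cadence: simple bots tend to post on near-fixed schedules, while humans are irregular. The sketch below flags accounts whose gaps between posts have a suspiciously low coefficient of variation. The thresholds are illustrative assumptions, not calibrated values, and no single signal is conclusive on its own.

```python
import statistics

def likely_automated(timestamps, min_posts=10, cv_threshold=0.1):
    """Flag an account whose posting intervals are suspiciously regular.

    Computes the coefficient of variation (stdev / mean) of the gaps
    between consecutive post timestamps; very regular gaps suggest a
    scheduled bot. Thresholds here are placeholders for illustration.
    """
    if len(timestamps) < min_posts:
        return False  # too little data to judge
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    mean = statistics.mean(gaps)
    if mean == 0:
        return True  # many posts in the same instant
    return statistics.stdev(gaps) / mean < cv_threshold
```

Cadence is only one signal; account age, follower ratios, and content duplication are others that researchers combine in practice.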

Is your organization at risk of being targeted due to the nature of your organizing and communications work?

Develop internal policies on how to respond to negative and inflammatory comments online, and vigilantly guard your organization’s website and communications channels. Prepare a contingency plan for a sudden rush of traffic meant to take your website down. Report any attacks to nonprofit tech partners that track botnet attacks and can provide digital security planning recommendations.
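Part of such a contingency plan is simply noticing the rush of traffic early. A minimal sketch of a rolling-window alarm, assuming you can feed it request timestamps from your server logs (the class name and thresholds are invented for illustration and would need tuning to your site’s normal traffic):

```python
from collections import deque

class SpikeAlarm:
    """Rolling-window request counter that flags abnormal traffic.

    Raises a flag when the number of requests in the last window_s
    seconds exceeds baseline_rate * window_s by spike_factor.
    """
    def __init__(self, window_s=60, baseline_rate=5.0, spike_factor=10.0):
        self.window_s = window_s
        self.threshold = baseline_rate * window_s * spike_factor
        self.hits = deque()

    def record(self, ts):
        """Record one request at time ts; return True on a possible attack."""
        self.hits.append(ts)
        # Drop requests that have fallen out of the window.
        while self.hits and self.hits[0] <= ts - self.window_s:
            self.hits.popleft()
        return len(self.hits) > self.threshold
```

An alarm like this only buys time to act; actual mitigation means a DDoS-protection service or nonprofit partner fronting your site.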

What should bots for social justice look like?

There are many ways that bots can promote social justice and lift the voices and work of minorities. To take one example, in situations where releasing critical information to the public might endanger an activist’s life, a bot could be used to release that information instead. Bots could elevate the stories and narratives of groups often marginalized from mainstream public discourse. As automated activism expands and deepens, we need to identify the broader ethical and legal frameworks to guide how automation is integrated into social justice. This means asking questions like: Where do we draw the line between governments’ and politicians’ strategic communications and propaganda? How can we balance the need for security, privacy, freedom of speech, user protections, and preferences in automated online spaces?

How might innovations in automated activism coexist with traditional forms of organizing, messaging, and movement building?

The Internet has already been found to contribute to “slacktivism,” or half-hearted attempts at engagement. If we continue to automate more aspects of our political and civic engagement, we will need more research to determine how automated technology can increase civic engagement, support traditional forms of offline political engagement, and achieve political and social outcomes—rather than commoditizing that work and making some forms of engagement less impactful. Before building the next call-to-action bot, it’s important for technologists to understand what political organizers need to effectively do their work. Social justice advocates and activists should work with well-intentioned technologists and become key partners in identifying and understanding how technology can be useful to building and sustaining movements.

While automation can be used to lower the costs of collective action for social justice activists and organizations, it can also increase risk. It is relatively easy for bots to drown out organizations’ and activists’ discourse, while defending against those automated attacks is far harder. As long as bots continue to participate—at growing rates—in our public sphere without regulation or transparency, they will pose enormous threats to democracy. Every person’s voice—including those expressed online—should count, but that is threatened when automation is used to impersonate a single individual while amplifying his or her voice by the thousands.

Looking critically at your organization’s online strategies will help mitigate and plan for risks. And remembering that automation cannot replace activism, but only complement it, will go a long way toward ensuring the effective use of automated tools as they continue to develop.