
The rise of political bots

Thomas Baerthlein, London
August 6, 2016

Automated social media accounts - bots - have become part of our political communication. Bot armies can influence online discourse and pick fights with users. But are all bots bad? Thomas Baerthlein reports from London.

https://p.dw.com/p/1Jbyk
Image: Twitter bot (picture-alliance/dpa/J. Arriens)

Almost every Twitter user will have come across bots. Your new follower whose profile picture is the generic egg, and who posts vague words of wisdom in 140 characters exactly once a day, is most probably such an automated account.

Spam bots serving commercial purposes are very common. Others are more creative: The @MagicRealismBot (with close to 40,000 followers, certainly including a good number of bots) has been programmed to build and tweet, every two hours, randomly assembled sentences that imitate the writings of magic realist novelists such as Jorge Luis Borges or Gabriel Garcia Marquez.
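A bot like this needs little more than a handful of sentence templates and some randomly chosen words. The Python sketch below is a hypothetical illustration of the basic idea, not the actual @MagicRealismBot code; the word lists and templates are invented.

    import random

    # Invented word lists - the real bot draws on much richer vocabularies.
    CHARACTERS = ["a cartographer", "an archangel", "a librarian", "a retired general"]
    OBJECTS = ["a labyrinth", "a mirror", "an hourglass", "a forgotten alphabet"]
    PLACES = ["Buenos Aires", "a village made of rain", "an infinite library"]

    TEMPLATES = [
        "{character} discovers {object} in {place}.",
        "In {place}, {character} dreams of {object}.",
    ]

    def compose_sentence() -> str:
        """Fill a randomly chosen template with randomly chosen words."""
        sentence = random.choice(TEMPLATES).format(
            character=random.choice(CHARACTERS),
            object=random.choice(OBJECTS),
            place=random.choice(PLACES),
        )
        return sentence[:1].upper() + sentence[1:]

    # A scheduler (for example a cron job every two hours) would pass the
    # result to the Twitter API for posting.
    print(compose_sentence())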

The rise of bots has thrown up a wide range of new issues. Last year, it was reported that a Dutch web developer was called in for questioning by police after their bot had, in a similarly random manner, put together a statement (tweeted at another bot, incidentally) which an internet detective read as a death threat.

The implications of bots for political discourse on social media are another open question.

Bots highly effective on Twitter

Compared to other platforms like Facebook, bots are easiest to deploy and most effective on Twitter, and it is there that they have been used most commonly for political purposes.

Political bots come in different shapes: Fake followers mainly bolster the number of online supporters for a politician; bot armies "Twitter-bomb" discussions on certain topics in order to marginalize other viewpoints; and more sophisticated versions will react to certain keywords by starting a debate with others.

"Bots often get tangled up in each other's scripts. We've seen them argue with each other. We've seen real people argue with bots," Phil Howard, professor at the Oxford Internet Institute, told DW via e-mail.

For individual users, bots can be tough to spot, according to Howard. There are certain pointers, though, such as very unbalanced profiles.

"If one of the users in your network seems to have thousands of followers, but is only following one person, it is probably a bot," he said. "If it is following thousands of users, but has only one follower, it is probably a bot."

Manipulating online debate

Bots have become a tool of choice for dictators and authoritarian regimes with an interest in manipulating online debate. They are similarly employed by many politicians in democracies in order to influence citizens' opinions on controversial issues, or to make a difference in elections.

Mexico and Turkey are two cases where bots have been used especially widely.

"We haven't caught a bot significantly pushing public opinion in a particular direction in a democracy," said Howard, who leads a multi-disciplinary research project on political bots.

"Where they work the best is in sowing confusion or choking off a political conversation on a global issue that involves an authoritarian government. So bots are quite active in Russia, and part of an overall successful regime strategy to muddy issues and sow misinformation."

"At the beginning of the Syrian civil war, when journalists and interested publics outside the country were trying to find out what was going on in the country, the regime used bots to flood the #syria hashtag and kill it as a source of information about the resistance."

Danger to democracy?

Are bots a danger to democracy? Is a future possible where elections will be won by the party with superior bots? Saiph Savage, a computer scientist at West Virginia University, is not too worried.

"I think it is very likely that we as a society will establish norms about what kind of bots it will be ok to deploy - or not," she told DW. "So for instance, if a candidate is suddenly caught with an army of bots, this is something that society will punish."

On the other hand, candidates that use bots in a more sophisticated way to communicate better with the electorate could well be rewarded, she added. "It is more about being able to use social media to enable back-and-forth collaborations between candidates and citizens," Savage said.

'Botivist' to engage people

For her research, Savage has created bots along these lines that help activists mobilize citizens and involve them in political campaigns. Her "botivist" can, for example, crowdsource ideas. "The bot went out and found people that were, let's say, complaining about corruption. And the bot asked them then, how can we fight corruption?"
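Savage does not spell out her implementation, but the flow she describes - search for complaints about a topic, then reply with a question - can be sketched in a few lines. In the hypothetical Python below, search_tweets() and post_reply() are placeholders for whatever Twitter client an activist group actually uses, and the query string is invented.

    # Hypothetical sketch of the "botivist" flow; search_tweets() and
    # post_reply() stand in for a real Twitter client, not actual API calls.
    COMPLAINT_QUERY = "corruption -is:retweet"          # invented example query
    FOLLOW_UP = "How do you think we can fight corruption? Reply with your ideas."

    def run_botivist(search_tweets, post_reply):
        """Find users complaining about corruption and ask them for ideas."""
        for tweet in search_tweets(COMPLAINT_QUERY):
            # Disclose that the account is automated, as Savage recommends.
            post_reply(tweet["id"], "[bot] " + FOLLOW_UP)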

Even when the accounts were identifiable as bots, citizens did cooperate with them. Savage believes botivists should not pretend to be real people: even if that attracted more feedback, in the long run volunteers "might feel somewhat let down if they are contributing their time to something that they have not realized is automated."

She is convinced that such activist bots have a bright future. "A really cool thing about bots is that you can leverage a large number of people on these social media platforms. So you don't have to wait for people to download your app, you instantly have access to a large pool of people."

After crowdsourcing ideas, a bot could then facilitate a vote by the newly recruited volunteers about the best idea, and finally also help with implementation.
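One simple way such a vote could work is to number the crowdsourced ideas, ask volunteers to reply with the number of their favorite, and count the replies. The sketch below is a hypothetical illustration, not part of Savage's published work, and its matching of votes in reply texts is deliberately naive.

    from collections import Counter

    def tally_votes(replies, options):
        """Count which numbered idea is mentioned most often in the replies.

        replies: list of reply texts; options: dict mapping a number to an idea,
        e.g. {1: "open budget data", 2: "a whistleblower hotline"} (both invented).
        """
        votes = Counter()
        for text in replies:
            for number in options:
                if str(number) in text.split():
                    votes[number] += 1
                    break
        return votes.most_common(1)[0] if votes else None

    print(tally_votes(["I vote 2", "2 for sure", "1"], {1: "idea A", 2: "idea B"}))
    # -> (2, 2): option 2 received two votes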

Activists from the Black Lives Matter movement have shown another way of using a bot to mobilize supporters: the @StayWokeBot tweets a selection of poetic messages about how important certain individuals are for the movement. Many of the people it addresses actually reply to the bot, with comments like "thank you, you made my day!"