Dangerous Loudspeakers

Social networks are home to an increasing number of bots – computer programs that pretend to be people. They are also used for propaganda purposes.

Jun 17, 2019

Yes to a free Internet, no to bots: Thousands of people demonstrated recently against the upload filters proposed as part of the EU’s copyright law reform initiative, which has been adopted in the meantime.
Image Credit: Sebastian Gollnow/dpa

Digitalization has brought change to democratic societies, in some cases radical change. Political parties, media outlets, and researchers are only gradually coming to understand how the political public sphere takes shape on the Internet, how content is debated there, and how electoral decisions are made. At the same time, there is growing concern that digital channels can be used to manipulate public sentiment toward specific aims.

On the eve of the European parliamentary elections, with the future course of the European Union under debate, there was increasing discussion of social bots: computer programs that act like human users on social networks. This software is sometimes used by various interest groups with the goal of influencing the process of forming political opinions. Could bots become a danger to democracy?

“The notion that election results could be directly influenced with bots is certainly exaggerated – at least if you think any party could get five percent more or less as a result,” says Martin Emmer, a professor of media and communication studies at Freie Universität Berlin. Emmer is one of the founding directors of the Weizenbaum Institute for the Networked Society, an interdisciplinary research center that studies current social changes connected with digitalization.

“Bots are used in all major elections these days”

But bots can influence sentiment and opinions in society, says Ulrike Klinger. “Bots are used in all major elections these days,” she says. “They’re an inexpensive way to make topics or people appear to be more popular than they actually are.” Klinger is a junior professor of media and communication studies at Freie Universität. At the Weizenbaum Institute, she is in charge of the “News, Campaigns and the Rationality of Public Discourse” research group. Klinger and her colleague Tobias Keller of the University of Zurich recently published a scholarly paper on the subject of social bots in elections.

Twitter is especially fertile ground for bots, she says. Current studies suggest that anywhere from nine to 15 percent of all accounts on the short-message platform, which is popular with politicians and journalists, are controlled by algorithms. Some of them are passive bots, used to artificially inflate follower and like counts. Others are active bots, which do more than just like posts and follow accounts: they also share links and participate in discussions. “In many cases, it’s hard to tell that they are computer programs,” Klinger says. More complex bots generally don’t post repetitive content on a massive scale. Just like people, they post sporadically, and their tone changes. They are often programmed to propagate or suppress specific opinions.

“Spiral of Silence” Phenomenon

“The massive use of bots gives people the impression that a large number of people hold a certain opinion, although there might actually only be a few,” Klinger says. “So bots can, at least potentially, be used to fake social majorities and sentiments.” As a result, it is dangerous for journalists and politicians to use numbers of likes or followers to gauge the importance of certain topics or people. “How many people follow someone, or how hotly debated a subject is on social media, is often less relevant than many people might think,” Klinger says.

But bots can do more than just fake controversy or popularity. They can also affect the tone used on social networks. “Recent studies have shown that bots need to make up only two to four percent of the participants in a discussion to change its climate,” Klinger explains. This has its roots in the “spiral of silence” phenomenon. According to this theory, people are less willing to express their opinions when they believe they are in the minority. “If you can present a certain opinion as the majority opinion, you can silence other opinions,” the scholar says.

Social Bots Artificially Inflate Topics and Steer Opinions

Joachim Trebbe, a professor in the Media Analysis / Research Methods Division of the Institute for Media and Communication Studies at Freie Universität, believes the risk posed by bots is manageable. “I’m optimistic that professional journalists will be able to counteract them by responsibly filtering information,” he says. “Institutions like the TV program Tagesschau still enjoy high levels of trust among the public. These institutions will continue to play an important role in sorting through information and attributing credibility.”

Emmer agrees with that view: “Of course, there are people who have completely turned away from traditional media, so they are susceptible to manipulation by bots,” he says. “But the vast majority of people get their information from many different sources, so their contact with bots is only tangential.”

“Don’t count on Facebook and Twitter to somehow do the right thing”

Klinger counters by pointing out just how little is known even now about the influence of bots – in part because platforms like Facebook and Twitter keep most of their data locked up. “We can’t count on the platforms to somehow do the right thing,” she says. “Most of them are publicly traded companies, and their chief obligation is to their shareholders.” And that means they have no interest in seriously fighting bots. “These corporations’ capital is their user figures,” Klinger says. “So deleting bots and fake accounts actually harms their business model.”

She fears the platform operators won’t make a move voluntarily. And that means that for democratic societies, it will be crucial to impose stronger obligations on them by law. “Only if these companies share their data with independent researchers and academia will we be able to tell how bots really work – and who is commissioning the work they do.”

This text originally appeared in German on April 27, 2019, in the Tagesspiegel newspaper supplement published by Freie Universität.
