A Machine to Fight Hate
Researchers develop software to detect hate speech online.
Jan 17, 2018
Hate speech has become an everyday issue on the Internet. In social networks and the comments sections of many online media sites, discussions are increasingly being disrupted by posts insulting people for their origin, skin color, religion, sexuality, or gender.
In Germany alone, hundreds of people now work full-time as moderators or community managers dealing with these kinds of hateful posts. They monitor discussions and are tasked with evaluating posts, acting as intermediaries, taking countermeasures, and, when necessary, deleting comments.
“The volume of data and the sheer speed of digital discourse are so vast that it is difficult, if not impossible, for an individual to keep up,” says Martin Emmer, a professor of communications at Freie Universität Berlin. “During what was referred to as the refugee crisis, in the summer of 2015, so much hate was posted online that it was hardly possible to moderate it all.”
Emmer and his colleague Joachim Trebbe, also a professor, are in charge of the new research project “NOHATE – Overcoming crises in public communication about refugees, migration, foreigners.”
In cooperation with computer scientists from Beuth University of Applied Sciences and tech firm VICO Research & Consulting, the participants in the joint project are developing software that helps community managers in their daily work of stemming the tide of digital hate – especially hate speech directed at refugees and immigrants. The project is receiving about one million euros in funding from the German Federal Ministry of Education and Research (BMBF).
“We are developing an algorithm that can search social networks, forums, and comments sections and identify hate speech there,” Emmer says. “The program can tell community managers where and when things start to get hairy and intervention may be needed.” Emmer explains that down the line, the algorithm may also be able to act almost predictively, as a kind of early warning system detecting when there is a risk that a discussion will get out of hand. “In those situations, the software could also propose de-escalation measures,” Emmer says. “It could show the moderator which strategies were successful in similar cases.”
Emmer and Alexander Löser, a professor of computer science, had the idea for a program like this at a conference last year. Löser, who works at Beuth University of Applied Sciences, is responsible for the technical side of the project. The centerpiece is an algorithm that not only recognizes the instances of hate speech that people teach it to identify, but also expands on its own knowledge. “The algorithm uses a ‘deep learning’ method to examine nearby words and sentences on its own, constantly learning as it goes along,” says Löser. “When it comes across posts that it classifies as hate speech, it sounds the alarm.”
Löser stresses how important it is to work with social scientists in the project. “In terms of the technical side, having a learning method recognize whether someone in a text likes a certain kind of shoes or uses hate speech is similar. The crucial factor isn’t the algorithm, but the available data.” The only way for the algorithm to learn successfully is to give it as well-developed a data set as possible to start out with, Löser explains. “For a realistic picture, people need to feed the widest possible variety of training data into it first.”
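Löser’s point – that the data matter more than the algorithm – can be illustrated with a deliberately simple sketch. The project’s actual system uses deep learning; the toy classifier below is not that system, just a minimal bag-of-words Naive Bayes in plain Python showing how any text classifier, simple or deep, learns word statistics from whatever labeled examples it is given. All example texts and labels are invented for illustration.

```python
from collections import Counter, defaultdict
import math

def tokenize(text):
    # Lowercase and split on whitespace; a real system would use
    # proper tokenization and far richer features.
    return text.lower().split()

class NaiveBayesClassifier:
    """Tiny bag-of-words Naive Bayes: learns per-label word statistics."""

    def __init__(self):
        self.word_counts = defaultdict(Counter)  # label -> word -> count
        self.label_counts = Counter()            # label -> number of examples
        self.vocab = set()

    def train(self, examples):
        # examples: iterable of (text, label) pairs
        for text, label in examples:
            self.label_counts[label] += 1
            for word in tokenize(text):
                self.word_counts[label][word] += 1
                self.vocab.add(word)

    def predict(self, text):
        total = sum(self.label_counts.values())
        best_label, best_score = None, float("-inf")
        for label in self.label_counts:
            # Log prior plus log likelihood with add-one smoothing.
            score = math.log(self.label_counts[label] / total)
            label_total = sum(self.word_counts[label].values())
            for word in tokenize(text):
                count = self.word_counts[label][word] + 1
                score += math.log(count / (label_total + len(self.vocab)))
            if score > best_score:
                best_label, best_score = label, score
        return best_label
```

Whatever the model, its judgments can only be as good as the training examples: a classifier trained on a narrow or skewed set of comments will misfire on everything outside it, which is exactly why the project invests in collecting varied, carefully labeled data first.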
The training data are collected by communications scholars at Freie Universität. “Our first step is to use traditional methods of social research to study how hate speech works in the discourse surrounding migration,” says Sünje Paasch-Colberg, a researcher with a doctorate in communications who is participating in the project as a research associate. “We will be analyzing comments sections, collecting and categorizing hate speech, and studying how the topics develop and form waves,” she explains.
Several cooperation partners – including nongovernmental organizations such as the Amadeu Antonio Foundation, which works extensively on hate speech, and major publishers, like Axel Springer – will provide advice on the project and benefit from the software later on.
“For us as social scientists, developing a tool like this offers an opportunity not just to conduct studies, but also to have a practical impact on society,” Paasch-Colberg says. Plans call for the tech company involved in the alliance project to market the software and sell it to publishers and other media companies. The product could be used by companies all over the world in the future.
“Ultimately, it is also supposed to be a program that protects people in their professional lives,” Löser says. He explains that recent studies have shown that having to read hate-filled posts for hours every day can cause psychological strain. “A machine will be able to help a lot in that regard in the future,” he explains.
Still, everyone involved in the project is aware that plans like these require particular ethical vigilance. “The technology we’re working on is a very powerful one,” Emmer says. Löser is careful to point out that the algorithm always needs to be checked by a human. He says, “We’re not programming an automatic deletion machine. Our software will flag posts and make suggestions, but the final decision will always rest with a person.”
- Prof. Dr. Martin Emmer, Institute for Media and Communication Studies, Department of Political and Social Sciences, Freie Universität Berlin, Email: firstname.lastname@example.org