Google normally uses search history to target personalised ads at users. But people searching for terms related to violent extremism, or watching Islamic State propaganda videos on YouTube, could instead be directed towards counter-extremist messages.
“What they are doing is essentially just very specifically targeting people who are searching for specific search terms. They’re using the same kind of technology to place ads against extremism rather than just general ads,” Bjørn Ihler of the Omelas organisation, which works to counter extremism, told Radio Sweden.
One example of a counter-extremist video people might be directed to shows long queues of people waiting to buy food in the Syrian city of Raqqa, which Islamic State has declared its capital.
In early December, Microsoft, Facebook, Twitter and YouTube announced that they would start cooperating in labelling extremist content. For instance, if a video has been removed by one of the service providers, the others will be notified and can decide whether they too want to remove the content.
In August this year Twitter announced it had closed down 360,000 accounts spreading extremist messages. Facebook continuously closes down accounts with extremist content, and cooperates with organisations working against violent extremism.
Anne Kaun, an associate professor at Södertörn University specialising in social media practices, stressed that social media platforms need to become more open about what content they label as extremist.
“If platforms like Facebook and Twitter have such a big impact on society and on our conceptions of extremism, and act as infrastructure that perhaps contributes to extremist organisations, then society also needs insight into how they decide what is taken down and what gets shown on social media,” Kaun told Radio Sweden.