Pressured in Europe, Facebook details removal of terrorism content
By Julia Fioretti
(Reuters) - Facebook Inc on Thursday offered additional insight into its efforts to remove terrorism content, a response to political pressure in Europe over militant groups' use of the social network for propaganda and recruiting.
Facebook has ramped up its use of artificial intelligence, such as image matching and language understanding, to identify and remove content quickly, Monika Bickert, Facebook's director of global policy management, and Brian Fishman, counter-terrorism policy manager, explained in a blog post.
Facebook, the world's largest social media network with 1.9 billion users, has not always been so open about its operations, and its statement was met with skepticism by some who have criticized U.S. technology companies for moving slowly.
"We've known that extremist groups have been weaponizing the internet for years," said Hany Farid, a Dartmouth College computer scientist who studies ways to stem extremist material online.
"So why, for years, have they been understaffing their moderation? Why, for years, have they been behind on innovation?" Farid asked. He called Facebook's statement a public relations move in response to European governments.
Britain's interior ministry welcomed Facebook's efforts but said technology companies needed to go further.
"This includes the use of technical solutions so that terrorist content can be identified and removed before it is widely disseminated, and ultimately prevented from being uploaded in the first place," a ministry spokesman said on Thursday.
Germany, France and Britain, countries where civilians have been killed and wounded in bombings and shootings by Islamist militants in recent years, have pressed Facebook and other social media providers such as Google and Twitter to do more to remove militant content and hate speech.