Facebook says it’s been trying to do a better job of finding and pulling down terrorist content, and on Thursday the world’s largest social network said it’s seen signs of success.
Facebook said that in the third quarter, it pulled down 3 million posts related to terrorism, a drop from the 9.4 million posts it removed in the second quarter. The median amount of time terrorist content stayed on the platform after users reported it also dropped, from 43 hours in the second quarter to 18 hours in the third quarter, the company said.
Social networks are under more pressure to pull down terrorist content before violence spills into the real world. But as they beef up their efforts, these companies say bad actors are constantly changing their strategies to evade detection. Some terrorists try to create new accounts or break up their messages, the tech firm said.
“We can reduce the presence of terrorism on mainstream social platforms, but eliminating it completely requires addressing the people and organizations that generate this material in the real world,” wrote Monika Bickert, global head of policy management, and Brian Fishman, head of counterterrorism policy, in a blog post.
Facebook relies on machine learning to detect terrorist content that its reviewers should prioritize. Sometimes the company will automatically pull down posts if the system determines with “high confidence” that the post contains support for terrorism. It’s also been expanding some of its tools to more languages.
In the third quarter, about 99 percent of content related to ISIS and al-Qaeda was pulled down by the tech firm before a user reported it.