A banner reading “The Islamic State” is displayed on the Facebook page of a user identifying himself as Nawan Al-Farancsa. The page was still live Tuesday, May 7, 2019, when the screen grab was made. Facebook says it has robust systems in place to remove content from extremist groups, but a sealed whistleblower’s complaint reviewed by the AP says banned content remains on the web and is easy to find.
The video wasn’t produced by extremists; it was created by Facebook. In a clever bit of self-promotion, the social media giant takes a year of a user’s content and auto-generates a celebratory video. In this case, the user called himself “Abdel-Rahim Moussa, the Caliphate.”
“Thanks for being here, from Facebook,” the video concludes in a cartoon bubble before flashing the company’s famous “thumbs up.”
Facebook likes to give the impression it’s staying ahead of extremists by taking down their posts, often before users even see them. But a confidential whistleblower’s complaint to the Securities and Exchange Commission obtained by The Associated Press alleges the social media company has exaggerated its success.
Worse, the complaint shows that the company is inadvertently making use of propaganda from militant groups to auto-generate videos and pages that extremists could use for networking.
Zuckerberg did not offer an estimate of how much prohibited material, in total, is being removed.
The research behind the SEC complaint is aimed at spotlighting glaring flaws in the company’s approach. Last year, researchers began monitoring users who explicitly identified themselves as members of extremist groups. Documenting them wasn’t hard.
Some of these people even list the extremist groups as their employers. One profile, emblazoned with the black flag of an al-Qaida-affiliated group, listed its employer, perhaps facetiously, as Facebook.
The profile that included the auto-generated video with the flag burning also had a video of al-Qaida leader Ayman al-Zawahiri urging jihadi groups not to fight among themselves.