Social media companies step up battle against militant propaganda

Friday, December 11, 2015
French Prime Minister Manuel Valls (4th L) addresses major internet and social networking actors on the fight against terrorism in Paris, on December 3rd.

San Francisco - Facebook, Google and Twitter are stepping up efforts to combat online propaganda by Islamic militants but are doing it quietly to avoid the perception they are helping authorities police the web.
Facebook said it took down a profile believed to belong to Tashfeen Malik, who, with her husband, is suspected of killing 14 people in San Bernardino, California, in what the FBI is investigating as an act of terrorism.
The European Commission recently demanded internet companies take faster action on what it called “online terrorism incitement and hate speech”.
The internet companies describe their policies as straightforward: They ban certain types of content in accordance with their own terms of service and require court orders to remove or block anything else. Anyone can report content for review and possible removal.
But former employees said Facebook, Google and Twitter worry that if they are public about their true level of cooperation with Western law enforcement agencies, they will face endless demands for similar action from other countries.
They also fret that consumers will see them as tools of the US government. Worse, if the companies spell out how their screening works, they run the risk that technologically savvy militants could beat their systems.
Facebook, Google and Twitter say they do not treat government complaints differently from citizen complaints, unless the government obtains a court order. But there are workarounds, former employees, activists and government officials said.
One is for officials to complain that a threat, hate speech or celebration of violence violates the company’s terms of service, rather than any law. Such content can be taken down within hours or minutes and without the paper trail of a court order.
In the San Bernardino case, Facebook said it took down Malik’s profile for violating its community standards, which prohibit praise or promotion of “acts of terror”.
Some activists also report success getting social media sites to remove content. A French-speaking activist using the Twitter alias NageAnon said he helped take down thousands of YouTube videos by spreading links to clear policy violations and enlisting volunteers to report them.
A person familiar with YouTube’s operations said it tends to quickly review videos that generate a high number of complaints relative to the number of views.
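The triage heuristic described above, prioritizing videos whose complaint count is high relative to their views, can be sketched roughly as follows. This is an illustrative guess at the idea, not YouTube's actual system; all names and thresholds here are hypothetical:

```python
# Hypothetical sketch of flag-to-view triage for a review queue.
# Not YouTube's actual system; names and thresholds are invented.
from dataclasses import dataclass

@dataclass
class Video:
    video_id: str
    views: int
    flags: int

    @property
    def flag_ratio(self) -> float:
        # Complaints relative to exposure; +1 guards against zero views.
        return self.flags / (self.views + 1)

def review_queue(videos, min_flags=5):
    """Order flagged videos for human review: highest flag-to-view ratio first."""
    flagged = [v for v in videos if v.flags >= min_flags]
    return sorted(flagged, key=lambda v: v.flag_ratio, reverse=True)

videos = [
    Video("viral_clip", views=1_000_000, flags=50),
    Video("obscure_clip", views=2_000, flags=40),
    Video("barely_flagged", views=100, flags=2),
]
queue = review_queue(videos)
```

Under this sketch, the obscure clip with 40 flags on 2,000 views jumps ahead of the viral clip with 50 flags on a million views, matching the reported behaviour of reviewing high-complaint-ratio videos first.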
What law enforcement, politicians and activists would really like is for internet companies to stop banned content from being shared in the first place, but that poses a tremendous technological challenge, as well as an enormous policy shift, former executives said.
Some child pornography can be blocked because technology companies have access to a database that identifies previously known images. But there is no database of violent videos, and the same footage that might violate a social network’s terms of service if uploaded by an anonymous militant might pass as part of a news broadcast.
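The database approach described here amounts to fingerprinting an upload and checking the fingerprint against a list of previously identified images. A minimal sketch, using an exact cryptographic hash as a stand-in (production systems such as Microsoft's PhotoDNA use perceptual hashes that survive resizing and re-encoding, which this does not):

```python
# Simplified sketch of hash-database matching for known images.
# An exact SHA-256 hash, used here for illustration, matches only
# byte-identical files; real systems use perceptual hashing so that
# resized or re-encoded copies still match. All data is hypothetical.
import hashlib

def fingerprint(data: bytes) -> str:
    """Compute a hex digest identifying this exact byte sequence."""
    return hashlib.sha256(data).hexdigest()

# Database of fingerprints of previously identified images (hypothetical).
known_fingerprints = {fingerprint(b"previously-identified-image-bytes")}

def is_known(data: bytes) -> bool:
    """Flag an upload whose fingerprint matches the database."""
    return fingerprint(data) in known_fingerprints

matched = is_known(b"previously-identified-image-bytes")
unmatched = is_known(b"some-other-upload")
```

The article's point follows directly from this design: matching requires a curated database of known items, and no such database exists for violent videos, whose acceptability also depends on context the hash cannot capture.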
Nicole Wong, a former White House deputy chief technology officer, said tech companies would be reluctant to create a database of jihadist videos for fear that repressive governments would demand similar programmes to screen content they do not like.
“Technology companies are rightfully cautious because they are global players and if they build it for one purpose they don’t get to say it can’t be used for anything else,” said Wong, also a former Twitter and Google legal executive. “It will also be used in China to stop dissidents.”
Twitter revised its abuse policy to ban indirect threats of violence, in addition to direct threats, and improved its speed of handling abuse requests, a spokesman said.
“Across the board we respond to requests more quickly and it’s safe to say government requests are in that bunch,” the spokesman said.
Facebook said it has banned content praising terrorists.
Google’s YouTube has expanded a “Trusted Flagger” programme, allowing groups ranging from British anti-terror police to the Simon Wiesenthal Center, a human rights organisation, to flag videos and get immediate action.
A Google spokeswoman said the vast majority of trusted flaggers were individuals chosen based on previous accuracy in identifying content that violated YouTube’s policies. No US government agencies were part of the programme, though some non-profit US entities have joined in the past year, she said.