Social media companies step up battle against militant propaganda
San Francisco - Facebook, Google and Twitter are stepping up efforts to combat online propaganda by Islamic militants, but they are doing so quietly to avoid the perception that they are helping authorities police the web.
Facebook said it took down a profile believed to belong to Tashfeen Malik, who, with her husband, is suspected of killing 14 people in San Bernardino, California, in what the FBI is investigating as an act of terrorism.
The European Commission recently demanded internet companies take faster action on what it called “online terrorism incitement and hate speech”.
The internet companies describe their policies as straightforward: They ban certain types of content in accordance with their own terms of service and require court orders to remove or block anything else. Anyone can report content for review and possible removal.
But former employees said Facebook, Google and Twitter worry that if they are public about their true level of cooperation with Western law enforcement agencies, they will face endless demands for similar action from other countries.
They also fret that consumers will see them as tools of the US government. Worse, if the companies spell out how their screening works, they run the risk that technologically savvy militants could beat their systems.
Facebook, Google and Twitter say they do not treat government complaints differently from citizen complaints, unless the government obtains a court order. But there are workarounds, former employees, activists and government officials said.
One is for officials to complain that a threat, hate speech or celebration of violence violates the company’s terms of service, rather than any law. Such content can be taken down within hours or minutes and without the paper trail of a court order.
In the San Bernardino case, Facebook said it took down Malik’s profile for violating its community standards, which prohibit praise or promotion of “acts of terror”.
Some activists also report success in getting social media sites to remove content. A French-speaking activist using the Twitter alias NageAnon said he helped take down thousands of YouTube videos by circulating links to videos that clearly violated policy and enlisting volunteers to report them.
A person familiar with YouTube’s operations said it tends to quickly review videos that generate a high number of complaints relative to the number of views.
What law enforcement, politicians and activists would really like is for internet companies to stop banned content from being shared in the first place, but that poses a tremendous technological challenge, as well as an enormous policy shift, former executives said.
Some child pornography can be blocked because technology companies have access to a database that identifies previously known images. But there is no such database of violent videos, and the same footage that might violate a social network's terms of service if uploaded by an anonymous militant might pass as part of a news broadcast.
Nicole Wong, a former deputy chief technology officer at the White House, said tech companies would be reluctant to create a database of jihadist videos for fear that repressive governments would demand similar programmes to screen content they do not like.
“Technology companies are rightfully cautious because they are global players and if they build it for one purpose they don’t get to say it can’t be used for anything else,” said Wong, also a former Twitter and Google legal executive. “It will also be used in China to stop dissidents.”
Twitter has revised its abuse policy to ban indirect threats of violence, in addition to direct threats, and has improved the speed with which it handles abuse requests, a spokesman said.
“Across the board we respond to requests more quickly and it’s safe to say government requests are in that bunch,” the spokesman said.
Facebook said it has banned content praising terrorists.
Google’s YouTube has expanded a “Trusted Flagger” programme, allowing groups ranging from British anti-terror police to the Simon Wiesenthal Center, a human rights organisation, to flag videos and get immediate action.
A Google spokeswoman said the vast majority of trusted flaggers were individuals chosen based on previous accuracy in identifying content that violated YouTube’s policies. No US government agencies were part of the programme, though some non-profit US entities have joined in the past year, she said.