Fighting illicit content online
Efforts to police online content are struggling to keep pace with disinformation on the internet, particularly the radicalising propaganda put out by terrorist groups.
Despite strenuous international efforts, jihadist incitement remains difficult to remove from the World Wide Web.
A report by the Counter Extremism Project (CEP) casts doubt on YouTube’s proclaimed ability to expeditiously remove propaganda videos by the barbaric extremist group called the Islamic State (ISIS).
CEP said that “91% of these ISIS videos were uploaded more than once; 24% of terrorist videos included in the study remained online for more than two hours; and 60% of the 278 accounts responsible for uploading the videos remained active after posting content that violated YouTube’s terms of service.”
CEP Executive Director David Ibsen said it was alarming that “despite big tech’s promises of combating online extremism and terrorism, noxious, previously prohibited content continues to persist across all major platforms.”
Ibsen noted that ISIS videos had been uploaded 163,000 times in the previous three months. “That should be a wake-up call to lawmakers around the world that terror-inciting content remains pervasive and that these companies must do more to remove it once and for all.”
CEP’s findings are more than a wake-up call. They are a reminder of extremist groups’ appeal to marginalised young people. The constituency may be small, but its very existence is disturbing.
The fight is not about technological fixes alone; political leaders and civil society must also fashion a credible counter-narrative.
That said, the battle between truth and falsehood is demonstrably getting harder. Rapid technological change is making it ever more difficult to tamp down the online dissemination of all sorts of suspect material, not just by terrorists but by criminal organisations and unscrupulous individuals.
Experts are warning of the coming peril of Deepfakes, fake videos created by means of artificial intelligence. Deep learning allows technology to copy a person’s voice, speech patterns and facial expressions so perfectly that realistic videos can be produced of people appearing to say things they never really did.
Deepfakes could become powerful disinformation tools. They could whip up a frenzy of fear in the event of a terrorist attack or natural disaster. They could change the parameters of national debate. They could skew elections. Experts say it will be a year or two before we have the technological ability to differentiate between genuine and fake videos. Until then, there will be no reliable way of knowing whether a given video is real.
The implications are troubling. Some in the United States fear that Deepfakes may become a disruptive feature of November’s congressional midterm elections, but the risks are global. As Hany Farid, a digital forensics expert at New Hampshire’s Dartmouth College, has said: “The technology, of course, knows no borders, so I expect the impact to ripple around the globe.”
Social media are likely to be the natural platforms for the dissemination of fake videos. Plagued by misinformation, rumour, canard and slander, social media are well able to feed biases and fuel communal, cultural and ideological tensions. With its high usage of social media, the Middle East and North Africa region runs great risks if Deepfakes overrun news feeds and take over online conversations.
As we know, social media can sometimes be less a channel for democratic debate than a dangerous catalyst of heightened tension. In generally conservative Arab societies, a backlash against technology could have profound consequences. If technological advances in information delivery come to be seen as untrustworthy, that perception could dampen the region’s appetite for early adoption. Worse, it could become an argument for inhibiting progress.
Big tech companies say they’ll fight the good fight against fake material, just as with terrorist propaganda. YouTube has pledged that “authoritative” news sources will be more prominently displayed, particularly when big stories break.
YouTube’s Chief Product Officer Neal Mohan said that 10,000 “human reviewers” at Google will help determine which news stories and videos should be billed as “authoritative.” But YouTube also acknowledged it would need to spend a great deal more in the next few years to meet “emerging challenges” such as misinformation.
A lot is at stake when it comes to internet content. The world community must act to crack down on the creators and distributors of bad, mad and dangerous untruths.