The UK Prime Minister is using the annual G7 summit of seven of the world’s major industrialized democracies to push for more to be done about online extremism, including co-ordinating on ways to force social media platforms to be more pro-active about removing extremist content and reporting it to authorities.
Theresa May is chairing a counter-terrorism session at the G7 summit today in Sicily, meeting with the leaders of the US, Canada, Germany, France, Italy, Japan and representatives of the European Union.
It’s a drum her own Home Secretary has been banging at home in recent months. And just the latest instance of the political thumbscrews being applied to social media giants.
In Germany in April, for example, the government backed proposals to levy fines of up to €50 million on social media firms that fail to promptly remove illegal hate speech from their platforms.
Before May left for the summit yesterday, the BBC reported that she planned to lead a discussion with her fellow world leaders on how to “work together to prevent the plotting of terrorist attacks online and to stop the spread of hateful extremist ideology on social media”.
According to The Guardian, she is expected to tell her G7 counterparts that the fight against ISIS is shifting from the “battlefield to the Internet”, and to urge them to co-operate to enforce stricter rules on social media companies.
Specifically, the newspaper said May will press for social media firms to:
- develop tools that could automatically identify and remove harmful material based on what it contains and who posted it
- tell the authorities when harmful material is identified so that action can be taken
- revise conditions and industry guidelines to make them absolutely clear about what constitutes harmful material
The move follows another terror attack on UK soil, after a suicide bomber blew himself up at a pop concert in Manchester on Monday evening, killing 22 people and injuring dozens more.
There have been no suggestions so far, however, that social media platforms could have thwarted the attack by providing pre-emptive intelligence.
Indeed, it is the UK government’s own counterterrorism policies that are facing the most uncomfortable questions in the wake of the attack, given the bomber had been repeatedly reported to police in the years prior. Yet he was not, evidently, stopped from obtaining the know-how or materials to construct a bomb, nor from successfully executing an attack.
Other recent instances of terrorism on UK soil have included an attack in Westminster in March, when a lone attacker used a car and knives to attack pedestrians and police. The Westminster attacker apparently sent a WhatsApp message minutes before commencing the attack saying he was waging jihad in revenge for Western foreign policy.
Meanwhile, a homemade bomb planted on a London Underground Tube train in October last year, which failed to go off, had apparently been put together by its teenage perpetrator following instructions found online.
The problem for UK security services is that they are under-resourced relative to the scale of the threat. As Muddassar Ahmed writes today in The Independent, there are around 3,000 people on UK terror watch lists, yet only 4,000 staff in MI5, the domestic intelligence agency. The agency simply does not have the manpower to closely monitor so many potential terrorists.
May has also faced specific criticism in the wake of the Manchester attack for cuts the government made to UK police numbers. She had apparently been warned two years ago that cuts to community policing in Manchester could threaten counterterrorism efforts in the city. The optics at this point look terrible.
Her G7 comments therefore risk looking like an attempt to shift both blame and responsibility — with May leaning on social media firms in a bid to effectively outsource responsibility for terrorism monitoring to tech platforms, at a time when her own government’s domestic policing and counterterrorism resourcing looks to be what’s actually lacking.
Perhaps the most significant push she’s making at the G7 is for the countries to co-ordinate on revising industry guidelines on harmful material and on the conditions they place on tech firms. Although, according to Guardian sources, she’s not advocating for financial penalties at this stage (such as those already on the table in Germany).
The paper quotes a senior government source saying May wants the G7 nations to “move towards a common approach focused on the need to defeat [Isis]. In particular she wants to use G7 to call for the members to adopt a collective approach when working with tech companies on this agenda, and she will say that the industry has a social responsibility to do more to remove harmful content from its networks”.
However, whether she will be able to convince President Trump to join a collective attempt to apply extra pressure on US tech giants is not clear.
Much of what she’s generally calling for social media firms to do is already arguably taking place.
Facebook has previously said it is working on tools and technologies to try to automate flagging up extremist content, including CEO Mark Zuckerberg publicly discussing the potential of AI to help with content review.
Facebook also emphasizes that it does already reach out to authorities if it finds evidence of an imminent threat of harm or terrorism. And does already use AI and image and video matching technology when it identifies terrorist content to try to unearth related content and accounts, for example.
That said, ongoing criticism of the effectiveness of Facebook’s content moderation processes underlines the scale of the company’s own challenge on this front. And the 3,000 additional moderation staff announced by Zuckerberg in the wake of another content moderation scandal represent a tiny drop in the ocean for a platform with nearly two billion users.
With the additional moderator headcount, Facebook will still only have 7,500 people employed to review all flagged content: more than MI5’s headcount in absolute terms, but far fewer relative to the staggering scale of its content moderation challenge, which runs the gamut from reviewing and making judgements on extremist/terrorist-related content; to child abuse and other criminal content; to hate speech and racism; to violence and cruelty of all stripes. All the while it aims to balance ‘safety with openness’, as it puts it, i.e. permitting the sharing of controversial/disturbing content provided it’s not illegal/sadistic. (A very tricky balancing act to pull off, evidently.)
In a statement today responding to May’s comments on extremist content, Facebook’s Monika Bickert, head of global policy management, said: “We want to provide a service where people feel safe. That means we do not allow groups or people that engage in terrorist activity, or posts that express support for terrorism. Using a combination of technology and human review, we work aggressively to remove terrorist content from our platform as soon as we become aware of it — and if there is an emergency involving imminent harm to someone’s safety, we notify law enforcement. Online extremism can only be tackled with strong partnerships. We have long collaborated with policymakers, civil society, and others in the tech industry, and we are committed to continuing this important work together.”
Twitter did not provide a statement when contacted for comment, but a spokesman pointed to what he said are proactive technological steps the platform takes with regard to countering violent extremism, such as using technological tools to surface accounts that are promoting terrorism. He noted, for example, that between July 1 and December 31, 2016, a total of 376,890 Twitter accounts were suspended for this reason — with 74 per cent of those having been surfaced by the platform’s tech tools rather than via user reports.
Featured Image: the lightwriter/Getty Images