Opinion: Christchurch and Regulation
The recent Christchurch terror attack is representative of a toxic ideology that is infecting social discourse the world over. In this blog post, analyst Michael Urquhart explores the role of social media companies in policing content on their platforms, and the effect that regulation could have on the character of extremist discourse online.
Arraigned in court for the killing of 50 Muslims at prayer on Friday 15th March 2019, the Christchurch gunman is representative of an anti-Muslim, anti-migrant ideology that is infecting social discourse the world over. News reports and media commentary have noted that the victims were fired upon “indiscriminately” during the attack. Words lose their meaning so fast – it was precisely anti-Muslim discrimination that decided the victims’ fate. The names scrawled on the murder weapons are testament to the violent xenophobia that impelled the white ethno-nationalist to act: ‘Alexandre Bissonnette’ – the perpetrator of the 2017 Quebec mosque attack, or ‘Charles Martel’ – a Frankish military leader known for defeating the armies of the Umayyad Caliphate, which had been busily exerting their influence over the Iberian Peninsula since the early eighth century.
Aside from highlighting once again the potency of extreme right-wing ideas, the attack has refocused public attention on the role of the internet in the spread of hate. With tech and social media companies under increasing pressure to sanitise their platforms, it is clear that more needs to be done to restrict the availability of extremist content online, which raises important questions: Who is responsible for regulation? What impact will such attempts have on extremist discourse? And what does the impulse to censor tell us about how society understands the workings of ideology?
Within a few hours of the shooting, social media channels were rushing to remove all traces of the attacker’s online presence in a vain attempt to corral his worldview. Particular effort has been devoted to the headcam footage of the attack, livestreamed for 17 painful minutes. It is certainly a moral necessity that the video be removed, like any sordid snuff film, but the architecture of social networks ensures that regulators are always playing catch-up.
Facebook removed some 1.5 million clips within 24 hours of the attack and is still struggling against a Niagara of uploads. YouTube has been similarly overwhelmed, noting that volumes were “unprecedented both in scale and speed – at times as fast as a new upload every second.” That is to say nothing of those websites that are more hospitable to extreme ideology: Gab.com, a far-right hangout with 850k subscribers, is replete with comments celebrating the attack. Sajid Javid has remarked that “online platforms have a responsibility not to do the terrorists’ work for them” – this is true, though it raises the question of what impact state policy might have had in the meantime.
Tech companies are demonstrating a readiness to innovate in their response to crisis – YouTube has disabled a feature that allows users to search for recent uploads, blocking new pathways to the content. But shouldn’t the emergency response be buttressed by preventative legislation? The UK Home Secretary cites an impending review of the government’s position on the regulation of extremist content, but until the terms of the ‘Online Harms’ white paper are made public (having already been delayed since late 2018), shifting the blame to social media platforms could be read as opportunistic.
Even if regulatory efforts were advanced, it would be difficult to target content with any precision, especially given the extreme right’s tendency towards trolling behaviour online. The gunman is well-versed in shit-posting, laying thinly veiled traps to confuse analysis and obfuscate his rhetoric. In that context, some have recommended against inspecting his manifesto in any great detail, observing that the motivation behind the attack is in plain view and that wading through its 74 pages is a waste of time. The first premise is a fair one. The second is, I think, hasty. It may well be too much to hope for any coherent intellectual theory to emerge from the tedious manuscript; even a cursory reading of The Great Replacement reveals the inconsistency that is often characteristic of ideologies per se. But that is not to say that nothing can be learned from the text, which repays some measure of attention. The following appears in a base, self-congratulatory FAQ:
Q: From where did you receive/research/develop your beliefs?
A: The internet, of course. You will not find the truth anywhere else.
You can smell the sarcasm. But this extract does suggest something about the warped epistemology that defines right-wing extremism in the digital age. Coded inside jokes, wild conspiracy theory and obtuse hyperbole fester in dark corners of the internet. To some extent, the truth claims of far-right ideologies are less important than the impulse to troll with rhetoric oriented around hateful ideas.
Viewed in this light, the Christchurch attack can be seen as a social media catastrophe, in which violent online role-play spilled over into reality, with disastrous consequences. Any attempts at regulation must somehow tend to these digital habits, especially given the risk that increased policing will drive expressions of right-wing extremism further into parody and complexity, which will only make the task more difficult. Regulators must also be mindful that censorship is often treated as a call to action by right-wing ideologues – in that regard, state or tech intervention might even accelerate the spread of the virulent ideas it is designed to protect against.
The metaphor of epidemiology is instructive; the transmission of ideology is often discussed as though it were analogous to infection. Exposure is certainly a necessary condition for radicalisation – without it we might expect people to be radicalised by their kettles – but exposure to a hateful ideology is not by itself sufficient. We would do better to understand the matter as a kind of chemical reaction, in which text and reader are connected in a reciprocal relationship. In one direction, texts communicate ideas by way of anecdotes, songs, images, flags, etc. In the other, a reader approaches an ideological text carrying the baggage of life experience – expectations, prejudices, biases, stereotypes, and the rest of it – which largely determines the process of interpretation.
For this reason, the mode of regulation currently being discussed (i.e. removal of content) is only half of the story. To fully address the issue of extremism, society needs to also account for the personality infrastructure that admits of hateful ideas in the first place. This, of course, is easier said than done.