The Christchurch Call and Eliminating Violent Extremism Online

On March 15th, the world witnessed an atrocity that left fifty-one people dead at two mosques in Christchurch, New Zealand. A livestream video capturing the massacre circulated across social media platforms for two months and outraged people across the globe.

The international community responded on May 15th. New Zealand Prime Minister Jacinda Ardern and French President Emmanuel Macron announced the formation of a global initiative to combat online extremism and related terrorism. “The Christchurch Call to Action” (The Call) is an agreement between countries and tech companies to unite in this difficult endeavor.

Ardern and Macron called upon countries and tech companies to voluntarily join this global initiative. An impressive list heeded this request. The purpose of The Call is to transform the internet into a safer environment through cooperation, education and research whilst protecting basic human rights and freedoms.

This global commitment stands in contrast to the United Kingdom’s Online Harms White Paper, in which London proposed watchdogs, regulations, and fines to govern its cyberspace. The Christchurch Call instead offers a voluntary global commitment to making the internet safe through collaboration between states and tech companies. Giving these entities the choice to join, rather than coercing them, matters: joining of their own accord shows that The Call is a united front against online extremism.

Amazon, Facebook, Google, Microsoft, and Twitter released a nine-point plan and a joint statement in response to The Call. This preliminary framework lays out five individual plans and four collaborative efforts, covering better security, updated terms of service, education, and shared technology development.

The United States was among the countries unwilling to join. Washington stated that while it supported the overall goal, it was not an appropriate time to sign on. Its concerns center on freedom of expression; in the past, the Trump Administration accused social media companies of denying users these rights.

The governance of cyberspace presents the main issue for American interests. Cyberspace mirrors the Wild West: it is largely self-governed, and no single state can claim authority over it. The only entities who manage it are people and companies. The Call initiates the conversation over how cyberspace should be governed, and whether it can be governed in the first place.

By signing, states volunteer not only to safeguard the internet but also to have it governed by all signatories. This becomes problematic when those countries disagree with one another. Many countries use cyberspace for purposes that may conflict with The Call, and signing it may forfeit states’ right to act freely in cyberspace.

Another point of interest is the co-existence of the Online Harms White Paper and The Call. Both tackle the same issue, but in different ways, and these differing approaches create possible dysfunction. Already there is a conflict of interest among states that have signed The Call over the appropriate methods of combating online extremism and terrorism.

Ideas and solutions must be consistent in order to regulate cyberspace. Discussion over how to achieve goals is expected, but one country implementing punitive regulations while another pursues a holistic approach sends a mixed message.

As it stands, the Christchurch Call to Action appears as a list of strategies that states and tech companies plan to implement, including calls for transparency, collaboration, and better security. Terrorism is a complicated social issue, but having key actors working together to counter online terror and extremism is a giant leap forward. It will be interesting to witness how states work with each other, and how they collaborate with tech companies, to address the issue.

The EU Calls for Removal of all Extremist Content on Social Media

The European Union has given social media companies like Google, YouTube, Facebook, and Twitter three months to demonstrate that they are making efforts to rid their platforms of extremist content, in an effort to fight online radicalization. This has been a significant issue in Europe, and the European Commission hopes that by removing extremist content within an hour of notification, social media companies can halt the proliferation of radicalization and extremist ideologies [1].

This could certainly help stop the lone-wolf radicalization phenomenon that has been occurring online, but certain realities of the plan remain unclear. The proposal adds to the existing voluntary system agreed upon by the EU and social media companies, under which social media platforms are not legally responsible for the content circulating on their sites [2].

It’s unclear how feasible the EU proposal is, since companies will struggle to deliver on the one-hour mandate. For example, Google currently reviews 98% of reported videos within 24 hours [3], well short of a one-hour turnaround.

The recommendations are non-binding but could potentially be taken into account by European courts. For now, they serve as guidelines for how companies should remove illegal content [4].

The next few months will demonstrate how the EU will proceed and whether tech companies will become more helpful in the fight against violent extremism. While it is certainly a step in the right direction with regard to decreasing online radicalization, there will be pushback from companies that find the increased effort and potential legal battles bothersome.

[1] Gibbs, S. (2018, March 1). EU gives Facebook and Google three months to tackle extremist content. Retrieved March 1, 2018, from

[2] Social media faces EU ‘1-hour rule’ on taking down terror content. (2018, March 1). Retrieved March 1, 2018, from

[3] Social media faces EU ‘1-hour rule’ on taking down terror content. (2018, March 1). Retrieved March 1, 2018, from

[4] Gibbs, S. (2018, March 1). EU gives Facebook and Google three months to tackle extremist content. Retrieved March 1, 2018, from