Terrorist organisations today often operate without clear borders, and their digital footprints travel just as freely, amplified by algorithms, shared by users, and at times left unchecked by the very platforms that host them. In response, a global reckoning is underway over the question of liability: should social media companies be held responsible for the spread of terrorist content online?
This debate is far from settled. Across the globe, governments are adopting radically different legal frameworks to regulate tech platforms’ responsibilities. These approaches reflect not only varying levels of technological capacity and political will but also diverging philosophies about the balance between public safety, freedom of expression, and corporate accountability. The result is a fragmented legal landscape in which global platforms must navigate contradictory demands from nation-states, sometimes with grave human rights implications.
Diverging Approaches in Europe and the United States
The European Union has taken a more interventionist approach with its Digital Services Act (DSA), enacted in 2022. This legislation requires large platforms such as X (formerly Twitter), Meta, and YouTube to assess and mitigate systemic risks, including the dissemination of terrorist content. The DSA’s enforcement was tested during the October 2023 Israel-Hamas conflict, when graphic images and propaganda spread rapidly. The EU opened an investigation into X for allegedly failing to remove illegal content, prompting Internal Market Commissioner Thierry Breton to issue a formal warning. Breton made it clear that X’s subsequent withdrawal from the EU’s voluntary Code of Practice on Disinformation would not exempt it from its legal obligations under the DSA. ‘You can run, but you can’t hide,’ he warned. The incident underlined the EU’s intent to hold platforms accountable and its belief that freedom of expression must be balanced with civic responsibility. However, a structural limitation of the DSA lies in its procedural ambiguity: the regulation imposes no legal deadline for concluding formal proceedings, and the length of an in-depth investigation depends on factors such as the intricacy of the case, the degree of cooperation from the platform under scrutiny, and the procedural rights exercised by the company in question.
In contrast, the United States maintains one of the strongest legal shields through Section 230 of the Communications Decency Act, which grants platforms broad immunity from liability for user-generated content. As interpreted by American courts, Section 230 shields platforms from being treated as publishers of third-party material and protects them when they remove harmful content in good faith. These provisions have enabled moderation without liability, but they continue to fuel debate over platform accountability and content amplification. In Twitter, Inc. v. Taamneh (2023), the US Supreme Court held that platforms like X (then Twitter) cannot be held liable for terrorist content merely for hosting it; instead, liability requires proof that the platform knowingly provided substantial assistance to a specific act of terrorism. The Court warned that imposing broader liability would ‘run roughshod over the typical limits on tort liability’ and risk treating any communications provider as complicit simply for failing to prevent misuse of its services. While critics argue that Section 230 has been interpreted too broadly, granting platforms sweeping immunity, defenders caution that reform could trigger over-censorship.
Regulatory Variations Across Jurisdictions
In different political contexts, approaches to regulating terrorist content often reflect broader governance priorities, including the management of dissent and control over information environments. In India, authorities have used the Unlawful Activities (Prevention) Act (UAPA), a national instrument, to compel social media companies to remove content related to terrorist activities, and have invoked the Financial Action Task Force (FATF) framework, an international instrument, to tackle money laundering and the financing of terrorism and proliferation. Nonetheless, the Indian government’s use of these instruments has prompted alarm. Amnesty International argues they have been weaponised to suppress legitimate dissent and dismantle civil society: the UAPA has been deployed to arrest activists and human rights defenders, while the FATF framework has been invoked through foreign funding laws to restrict the operations of critical NGOs. Amnesty contends that the use of both instruments has a chilling effect on civil society and free expression, raising concerns about compliance with international norms on human rights and the rule of law.

Turkey has taken a similarly assertive approach, passing the ‘Disinformation Law’ in 2022, which requires platforms to appoint local representatives and comply with takedown orders or face penalties. While aimed at combating disinformation, its vague criminalisation of ‘misleading information’ threatens free expression. Critics argue it increases government control over content and fuels media repression and self-censorship rather than fostering reliable information. Likewise, human rights groups report that terrorism and defamation charges under this law have been deployed to intimidate journalists, human rights defenders, and civil society actors, raising concerns about due process, media freedom, and the chilling of democratic debate.
In conflict-prone regions such as Myanmar and Ethiopia, the challenge is often one of under-moderation. Meta has faced criticism for failing to adequately monitor hate speech in languages other than English. In Myanmar, according to a United Nations report, Facebook became a platform for inciting violence against the Rohingya minority, contributing to real-world atrocities. In Ethiopia, similar failures were evident during the Tigray conflict, when posts inciting ethnic violence and misinformation circulated widely with minimal intervention. These cases illustrate how enforcement disparities, particularly in multilingual and high-conflict environments, expose the limits of global platforms’ current content moderation strategies.
Competing Values and Shared Frameworks
At the heart of this global debate lies a fundamental tension: how to reconcile the protection of free expression with the need to prevent real-world harm. In some jurisdictions, as noted, there is a growing recognition that platforms must assume greater responsibility, particularly when engagement-driven algorithms amplify harmful content. Yet determining the threshold for liability remains deeply contested, both legally and ideologically. Compounding this challenge is the absence of a unified international standard: what constitutes terrorist content in one country may be viewed as protected speech in another. Governments leverage these ambiguities to demand takedowns or to justify inaction based on political expediency rather than principle. These inconsistencies not only expose users to unequal protections but also place tech platforms in the impossible position of being arbiters of global speech.
Efforts towards consensus are emerging. The Christchurch Call, initiated by France and New Zealand, encourages states and tech firms to collaborate in tackling terrorist content online. Similarly, the Global Internet Forum to Counter Terrorism (GIFCT) supports a shared database of extremist material. Nonetheless, criticism of both initiatives persists regarding transparency and governance. While some advocate for a binding international framework, a ‘digital Geneva Convention’, to establish baseline norms for content moderation, others argue that such a framework is unnecessary, asserting that the existing Geneva Conventions already extend to cyberspace. They caution that creating a new, standalone legal regime could lead to redundancy, legal confusion, and political impracticality.
Though the road to convergence is long, the stakes are undeniable: a global digital commons demands a shared, principled approach. As the legal battleground over terrorist content intensifies, it becomes clear that digital platforms are not passive conduits but active architects of the online environment, whether through algorithmic design, content moderation practices, or compliance with government mandates. The liability question thus transcends legal technicalities. It reflects deeper normative struggles over who gets to define harm, enforce norms, and shape public discourse in the digital age. In the absence of coherent international standards, the fragmentation of regulatory approaches risks entrenching inequalities, empowering authoritarian tendencies, and undermining both the protection of human rights and the effectiveness of counterterrorism efforts.
By Charlotte Soulé, Rise to Peace intern and Master’s student in International War Studies at University College Dublin