The Liminal Agent: A New Model of Online Radicalisation?

In his recent essay, online extremism researcher Joshua Citarella sketches one of the new psycho-political cartographies of our time: a theory of how the internet no longer merely hosts extremism but actively incubates, mutates, and weaponizes it. His model of online radicalisation, one that emerges from self-directed isolation, deep cognitive immersion, and ideological bricolage, resonates profoundly with what Rise to Peace has identified as the domain of Liminal Warfare: the weaponization of thresholds, ambiguity, and incomplete identity in modern conflict spaces.

Increasingly, the traditional models of threat assessment, born of old threat matrices that privilege hierarchical organizations, clear ideological movements, and structured recruitment, are inadequate to describe the contemporary threat landscape. In Citarella’s vision, the world of online radicalisation has irreparably changed. Radicalisation today is decentralized, asynchronous, memetic, and, most importantly, liminal. It happens primarily in the unstable interzones between identity, ideology, and action.

Citarella’s Model

Citarella proposes that radicalisation, at least in the West, often begins not from prior ideological conviction but from a condition of existential boredom, social and economic alienation, and exploration. A young person, often socially isolated and politically disenchanted, stumbles into online subcultures while seeking meaning, excitement, or community. They begin to scroll: into gaming forums, irony-poisoned meme pages, “political compass” esoterica, survivalist groups, and ideological echo chambers.

What matters is not a coherent doctrine, but the immersive ritual of searching itself. Platforms that favour algorithmic serendipity (TikTok, YouTube, Reddit) reinforce a pattern of escalating extremity, either through exposure to increasingly niche ideologies or by creating a pseudo-gamified environment where ideological commitment becomes performative currency. Politics becomes the hobby to end all hobbies. In this schema, most individuals never stabilize, instead drifting aimlessly and incoherently from one ideology to the next. They collage incompatible belief systems – eco-fascism one month, anarcho-primitivism the next, post-left accelerationism shortly after – creating an identity formation that is non-linear, recursive, and radically unstable. It is a process that could aptly be described as ‘liminal radicalisation’: the process of radicalisation itself is continuous and disaggregated, with no clear destination in mind.

The Political Economy of Alienation

What Joshua Citarella names as online radicalisation is, in fact, better understood as an emergent symptom of the wider economic decomposition and austerity that has driven radicalisation from the USA to Europe. Beneath the memetic ironies and aesthetic subcultures, beneath even the performative hatred, what one finds is a generation economically stranded and structurally abandoned. These are not natural ideologues – rather, they are young people who feel the burden of a collapsing horizon, where the prospect of an attainable middle-class life has disappeared as material circumstances decline.

In this sense, the online radicalisation pipeline is not ideological in its origins, but material. The typical subject is downwardly mobile, debt-strapped, and shut out of every traditional rite of social mobility: property, partnership, stability, meaning. Where civic institutions, societal inclusion and careers once stood, they are confronted instead with deindustrialisation, economic alienation and precarity. Their beliefs are downstream from their estrangement. Liberal democracies, unable or unwilling to address the foundational material crises of our time – housing shortages, wage stagnation, job insecurity and the erosion of public life – have instead left a vacuum into which this new ecology has rushed. It is precisely this vacuum that provides such fertile ground for the blossoming of new, discordant political radicalisation amongst the disaffected online.

Radicalisation as Ritual, Not Recruitment

What Citarella outlines aligns precisely with what we at Rise to Peace conceptualize as the liminal domain of contemporary conflict. Liminality describes the in-between state: the adolescent undergoing a rite of passage, the refugee severed from homeland, the online user between algorithms and reality. In warfare, the liminal is where traditional rules of engagement dissolve, replaced by new architectures of influence, disorientation, and emotional capture. Crucially, Liminal Warfare weaponizes affect before ideology. It seeks to keep populations in a suspended state of insecurity, overstimulation, and yearning, thus rendering them perpetually vulnerable to new vectors of control, recruitment, or activation. Radicalization, under these conditions, is no longer a matter of persuasive argument or charismatic leadership, but rather the ambient result of prolonged cognitive dislocation.

The Liminal Agent

How can we position this new pipeline of online radicalisation? Doing so requires designating a new actor in the matrix of terror and radicalisation: the Liminal Agent. These are individuals or small cells who do not adhere to conventional organizational structures, but whose radicalisation journey makes them latent nodes of potential disruption.

They often exhibit the following features:

  • Non-linear ideological trajectories (far-right to eco-terrorism to esoteric nihilism within months).
  • Memetic accelerationism (using memes not merely as propaganda but as a form of psychological conditioning).
  • Fluid affiliations (no loyalty to any single group, cause, or doctrine).
  • Stochastic violence potential (low predictability of timing, targets, or methods).

Citarella’s model gives empirical substance to this theory. The emerging radical does not require recruitment; they radicalise through participation. They do not need ideological discipline; they need only the internet and its ideological input. This is a battlefield of perpetual pre-recruitment, where being “in play” is more important than belonging.

Implications for Counter-Radicalization

The classic counterterrorism model – disrupt leadership nodes, monitor recruitment pipelines, disrupt communication channels – struggles to address this reality. How do you intercept a process without a recruiter? How do you “deradicalize” someone who has never fully radicalized to begin with, but exists in a permanent state of cognitive threshold-crossing?

Such implications require three necessary shifts:

  1. Intervention at the Affective Level: Programs must target emotional needs (belonging, agency, recognition) rather than merely correcting disinformation or promoting tolerance.
  2. Narrative Counter-Liminality: Instead of offering fixed counter-narratives, interventions must provide adaptive narrative scaffolding; ways to help individuals navigate uncertainty without collapsing into extremism.
  3. Liminal Early Warning Systems: Indicators of drift (increased engagement with irony-laden extremist memes, withdrawal from non-digital communities, pattern acceleration) must be mapped and monitored, not just explicit pledges of allegiance.
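To make the third shift concrete, the drift indicators named above could in principle be combined into a composite score. The sketch below is purely illustrative: the indicator names, weights, and review threshold are all assumptions, not an operational or validated system.

```python
# Hypothetical sketch of a "liminal early warning" composite score.
# All indicator names, weights, and the threshold are illustrative
# assumptions; a real system would require validated measures.

DRIFT_WEIGHTS = {
    "extremist_meme_engagement": 0.40,  # engagement with irony-laden extremist memes
    "offline_withdrawal": 0.35,         # withdrawal from non-digital communities
    "pattern_acceleration": 0.25,       # rate of escalation between ideological niches
}

def drift_score(indicators: dict) -> float:
    """Weighted sum of indicator readings, each clamped to the 0-1 range."""
    return sum(
        DRIFT_WEIGHTS[name] * min(max(value, 0.0), 1.0)
        for name, value in indicators.items()
        if name in DRIFT_WEIGHTS
    )

def flag_for_review(indicators: dict, threshold: float = 0.6) -> bool:
    """True if the composite score crosses the (assumed) review threshold."""
    return drift_score(indicators) >= threshold
```

The point of such a sketch is only that drift is monitored as a pattern across several weak signals, rather than waiting for a single explicit pledge of allegiance.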

This, however, represents only an intervention at the level of symptoms. The frameworks proposed here – narrative scaffolding, affective early warning systems, memetic analysis – can help us insofar as they map the terrain of liminal radicalisation, but they cannot on their own treat its cause. What Citarella’s model ultimately reveals, and what we must refuse to obscure, is that online extremism today is less a question of ideology than of material infrastructure – social, economic, and psychological. It is not born from belief but from absence: the absence of economic security, of community, of a shared future. This absence collapses the very ideological architectures that once made radicalisation intelligible and coherent. Much more must be done, and researched, about this new online pipeline of radicalisation if governments and civil society are to have any hope of limiting the spread of its contagion.

Etienne Darcas, Counter-Terror Research Fellow and Media & Terror Program Lead, Rise to Peace

Trends of 2020: What increased internet has meant for terrorism in Europe

The European Union, United Kingdom and Switzerland have had an unconventional year for identifying trends in terrorist activity. The COVID-19 pandemic and subsequent lockdowns, travel restrictions, and digitization of everyday life have posed difficulties for some terrorist groups and opportunities for others.

A Europol report on terrorism in Europe declared that in 2020, six EU member states experienced a total of 57 completed, foiled, or failed terrorist attacks. Taking the UK into account, the number rises to 119. Analysing the data, Europol found that all completed jihadist attacks were committed by individuals apparently acting alone, while three of the foiled attacks involved multiple actors or small groups. The attackers in the UK and EU were overwhelmingly male and typically aged between 18 and 33; in only one case, in Switzerland, was the perpetrator a woman. The same report identifies right-wing extremist trends over the last three years. Its findings depict similarities between Islamist and right-wing terrorists in terms of age and gender. Right-wing terror suspects are increasingly young, many of them still minors at the time of their arrest, and they appear intricately connected to violent transnational organizations on the internet.

COVID-19 lockdown restrictions have vastly increased European citizens’ reliance on the internet for everyday tasks, both professional and recreational. Statista recently released data showing that 91% of EU households had internet access in 2020, reaching an all-time high. But with the increased access and usage of the internet comes the risk of it being used for malicious purposes, specifically for terrorist organizing. The quantity of propaganda produced by official ISIL media outlets reportedly decreased in 2020. Despite this, ISIL continues to use the internet to stay connected to potential attackers who align themselves with the same ideology. These connections have allowed ISIL to call for lone actors to commit terrorist attacks. The data from Europol’s 2020 report confirms that it was lone-actor attacks that comprised most of the “successful” terror attacks in 2020, while attacks planned in a group were typically prevented.

Their right-wing extremist counterparts have developed sophisticated methods of recruitment in the internet age, particularly over the last year. Right-wing terror suspects have built communication strategies around gaming apps and chat servers typically used by gamers. Presumably to attract a younger demographic, right-wing extremists with links to terror suspects have diversified their internet use to include gaming platforms, messenger services, and social media. In the wake of the coronavirus pandemic and vaccination programmes, the Centre for Countering Digital Hate notes that Discord has been a vital tool for spreading disinformation and conspiracy theories involving racial hatred. Strategies used in online games to reward progression have been translated to serve right-wing propaganda: points are awarded to the most active members of certain Discord servers who fabricate and promote conspiracy theories, often including antisemitic tropes involving Bill Gates. Virtual currency plays a key role in promoting the narrative of success and reward, and in capturing the interest of minors who are active in the virtual space.

Combating terrorist threats in Europe has always been a challenge on account of the sporadic nature of terrorist activity. While the people behind the attacks may vary in socio-economic upbringing, religious affiliation, and nationality, some similarities remain. Based on these commonalities, solutions to tackling internet-based strategies could be introduced. If the EU were to develop a common framework for disrupting and taking down radical groups online, it could find greater success in combating digital extremism. ISIL’s online networks on Telegram were taken down in November 2019, and the group has since struggled to recreate networks of a similar scale.

Gender and age also give some insight into where to begin in diminishing future recruitment to ideology-based terrorism. While internet usage cannot be regulated, education can. Europe may benefit from the cooperation of educational institutions at all levels in raising awareness of the dangers of online radicalization. Workshops, information posters, and seminars introducing the intricacies of radicalization would inform vulnerable students about the potential pitfalls of internet consumption. This would create a clearer understanding of modern conspiracy theories: where they come from and why they exist.

Additionally, understanding the meaning behind extremist imagery, symbols, numbers, phrases, and music (as well as how to report them on the internet) would increase awareness among otherwise distracted students consumed by online trends and activity.

Paired with the awareness commitment, the EU should set a budget meeting the needs of mental health services in schools to introduce spaces in which students may express their concerns. This in turn could curb their vulnerability to online extremist groups looking to recruit.

Content Moderation Presents New Obstacles in the Internet Age

Image Credit: Cogito Tech (Cogitotech)

The first instance of a terrorist recording violent crimes and posting them online occurred when Mohammed Merah – the perpetrator of the 2012 Toulouse and Montauban attacks in France – did just that with his GoPro. Seven years later, the culprit of the Christchurch mosque shootings used a similar method. Both attacks raise the same question: how are social media platforms like Facebook, YouTube, and Twitter handling extremist content posted to their sites?

In response, tech giants have begun addressing this problem, seeking to formulate specific mechanisms that target extremist content. Facebook and Google have focused significant attention on developing automated systems – artificial intelligence (AI) software – to detect and ultimately remove content that violates their policies.

The Global Internet Forum to Counter Terrorism (GIFCT) acts as a cooperative through which tech companies pool known extremist content. A key purpose is to create unique digital fingerprints of contentious material, called “hashes.” These hashes are then shared within the GIFCT community, extending the reach of each takedown and ensuring that the burden of containment does not fall on any single network.
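The hash-sharing idea can be sketched in a few lines. This is an illustration of the general mechanism only, not GIFCT’s actual implementation: real systems use perceptual hashes that survive re-encoding, whereas the exact cryptographic hash below (SHA-256) stands in purely for demonstration.

```python
import hashlib

# Illustrative sketch of hash-based content pooling. The database and
# function names are hypothetical; SHA-256 is a stand-in for the
# perceptual hashing real systems use.

shared_hash_db = set()  # pooled fingerprints contributed by member platforms

def fingerprint(media_bytes: bytes) -> str:
    """Return a hex digest acting as the content's digital fingerprint."""
    return hashlib.sha256(media_bytes).hexdigest()

def report_content(media_bytes: bytes) -> None:
    """A member platform contributes the hash of confirmed extremist media."""
    shared_hash_db.add(fingerprint(media_bytes))

def is_known_extremist_content(media_bytes: bytes) -> bool:
    """Check a new upload against the pooled fingerprints before it spreads."""
    return fingerprint(media_bytes) in shared_hash_db
```

Note the limitation this exposes: an exact-match hash changes completely if a video is re-encoded or slightly edited, which is why production systems rely on perceptual hashing rather than cryptographic digests.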

YouTube also uses techniques like automated flagging. Membership of its Trusted Flagger Program includes individuals, non-governmental organizations (NGOs), and government agencies that are particularly effective at notifying YouTube of content that violates its Community Guidelines. As of March 2019, YouTube had removed 8.2 million videos from its platform using these techniques.

In a Wired interview, Facebook’s Chief Technology Officer (CTO) Mike Schroepfer described AI as the “best tool” for keeping the Facebook community safe. AI is not infallible, though, as it sometimes fails to understand the nuances of online extremism and hate. This is where human moderators enter the picture.

The Verge published a detailed piece on the lives of Facebook content moderators. Once a post has been flagged, the moderator can delete it, ignore it, or send it for further review. Moderators are trained to look for signs that content could be distressing to any number of people.
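The three-way decision described above amounts to a simple triage flow. The sketch below illustrates that flow only; the field names and rules are hypothetical, not Facebook’s actual review logic.

```python
from enum import Enum

# Illustrative triage of a flagged post: delete, ignore, or escalate
# for further review. Field names and rules are assumptions made for
# illustration, not a real platform's policy engine.

class Decision(Enum):
    DELETE = "delete"
    IGNORE = "ignore"
    ESCALATE = "escalate"

def moderate(post: dict) -> Decision:
    """Apply the three-way moderator decision to a flagged post."""
    if post.get("clear_violation"):
        return Decision.DELETE
    if post.get("ambiguous"):
        # Borderline content goes to a second, more specialised review.
        return Decision.ESCALATE
    return Decision.IGNORE
```

Even in this toy form, the flow shows why human judgment remains in the loop: the "ambiguous" branch exists precisely because automated flagging cannot resolve every case on its own.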

It took 17 minutes for the original live stream of the Christchurch attack posted on Facebook to be removed. That was more than enough time for it to be downloaded, copied, and posted to other platforms. Facebook claims it removed 1.5 million copies of the Christchurch footage within the first 24 hours, but copies remain.

Content moderation is such a mammoth task for social media companies because of the sheer scale of their operations. Millions of people are online and accessing these services at the same time, so errors are expected. The Christchurch attack exposed a glaring shortcoming in content reporting: livestreaming. Moderation has mechanisms for standard uploaded videos, but there are not enough tools to moderate a livestream.

Another issue facing social media companies is the tech-savvy nature of modern extremists. Extremist content can be uploaded with manipulated audio and video quality to bypass the filters in place. Language poses another problem, as most automatic content moderation is English-language based. Nearly half of Facebook’s users do not speak English; the company therefore needs to expand its technology to incorporate other languages.

Facebook, YouTube, Twitter, and Instagram continue to develop their AI tools and improve their human moderation strategies. Nevertheless, the actors taking advantage of current security loopholes are evolving as well. With 4.3 billion internet users in the world as of March 2019, content moderation itself remains under scrutiny.