Chat Control: It’s Baaack

Europe is on the verge of building an all-seeing machine

As I wrote previously, the EU’s liberticidal Chat Control law seemed to have been definitively shelved in October. Strong opposition from Germany and other member states led to the belief that Brussels’ Orwellian project had finally foundered. But anyone familiar with the mechanisms of the European Union knows well that proposals unwelcome to citizens are rarely abandoned; they are simply reformulated, renamed, and re-proposed away from the public eye.

And that’s exactly what happened. Chat Control is back, disguised as Chat Control 2.0. This time the game took place where European democracy goes to die: in the closed rooms of COREPER, the Committee of Permanent Representatives, one of the most powerful yet least visible institutions in the EU’s decision-making process.

Brussels’ Permanent Strategy of Attrition

The original Chat Control proposal, formally known as the Child Sexual Abuse Regulation, was put forward by the European Commission in 2022. The stated goal was noble: to detect child pornography through mandatory scanning of private communications, even encrypted ones. But critics immediately denounced what the measure really represented: a model for widespread surveillance that would have given states and EU institutions the ability to scan every private message.

A public consultation prior to the proposal revealed that over 80% of participants explicitly opposed the application of such measures to end-to-end encrypted communications. But this didn’t stop the Commission. As Thomas Fazi writes in UnHerd, the proposal has been repeatedly resubmitted from one Council presidency to the next, each time under new labels, always packaged as “necessary” and “urgent” but always retaining the same underlying logic: to normalize government monitoring of private communications on an unprecedented scale.

Chat Control 2.0: The Wolf in Sheep’s Clothing

Following opposition last May from Germany, Poland, Austria, and the Netherlands, Denmark, which currently holds the rotating presidency of the Council of the EU, immediately began drafting a new version. The result is Chat Control 2.0, presented in early November and quietly approved by COREPER, paving the way for final adoption by the Council, perhaps as early as December.

The new version formally eliminates the general requirement to monitor private chats, making searches “voluntary” for providers. According to Patrick Breyer, digital rights activist and former MEP, this maneuver amounts to “a deception” aimed at circumventing meaningful democratic debate.

Hidden Dangers in the New Text

Despite the apparent improvement, Chat Control 2.0 contains two extremely problematic features. The first is that it encourages "voluntary" mass scanning by online platforms, a practice already permitted on a "temporary" basis but which would become permanent. The second is a de facto ban on anonymous communication through mandatory age verification systems.

An open letter signed by 18 leading European academics specializing in cybersecurity and privacy warned that the latest proposal poses “high risks to society with no clear benefits for children.” The central issue is the expansion of AI-based scanning to identify “grooming” behavior. Current AI systems are unable to properly distinguish between innocent conversations and abusive behavior. As Breyer explains, no AI can reliably distinguish between innocent flirting, humorous sarcasm, and criminal grooming. The consequence? A digital witch hunt in which words like “love” or “meet” in conversations between family members or partners could trigger intrusive state scrutiny.

The numbers confirm these fears: the German Federal Police warns that approximately half of all reports received are irrelevant from a criminal standpoint. In Switzerland, 80% of the content flagged by machines is not illegal; it is often harmless vacation photos showing children playing on the beach.

The End of Encryption and Anonymity

Article 4 of the new proposal requires providers to implement “all appropriate risk mitigation measures.” This vague clause could allow authorities to pressure encrypted messaging services to enable scanning, even if this compromises their basic security model. In practice, platforms like WhatsApp, Signal, or Telegram could be forced to scan messages on users’ devices before encryption is applied.

The Electronic Frontier Foundation notes that this approach creates a permanent surveillance infrastructure that could gradually become universal. And it doesn't just affect EU citizens. If a platform decides to remain in the Union, anyone in the world who chats with a European citizen will have their privacy compromised.

Even more serious is the introduction of mandatory age verification systems for app stores and messaging services. While the Council claims that these systems can “preserve privacy,” critics insist that the concept is technologically unfeasible. Age assessments inevitably require biometric and behavioral data, increasing the volume of sensitive information stored and potentially exploitable. Requiring identity documents to verify yourself online would spell the end of anonymous communication, with disastrous consequences for whistleblowers, journalists, and political activists who rely on anonymity. It would also push minors under 16 toward less secure and poorly regulated alternatives.

A Solution that Doesn’t Solve the Problem

Critics argue that mass surveillance is simply the wrong approach to combating child sexual exploitation. Scanning private messages does not stop the spread of child pornography. Platforms like Facebook have been using scanning technologies for years, yet the number of automatic reports continues to rise. Criminals distribute material through decentralized forums or encrypted archives shared with links and passwords, methods that scanning algorithms cannot penetrate. The most effective strategy would be to remove child pornography from online hosts, something EUROPOL has repeatedly failed to do.

The Ghost of “Function Creep”

Of particular concern is the phenomenon of “function creep”: the process by which a technology introduced for a limited purpose gradually expands. Britain’s Online Safety Act and the “temporary” measures of the post-9/11 U.S. Patriot Act demonstrate how, once a surveillance infrastructure is established, it can be easily repurposed and is difficult to dismantle.

As Breyer aptly summarizes: “They sell us security, but they provide us with a total surveillance machine. They promise child protection, but they punish our children and criminalize privacy.”

Europe is on the verge of building an all-seeing machine. And once built, it will serve not only current political authorities but whoever holds power in the future. The window to stop Chat Control 2.0 is rapidly closing.

Sabino Paciolla graduated with honors from the Faculty of Economics and Business at the University of Bari, majoring in Statistical and Economic Sciences. He holds a Master's degree in Corporate and Investment Banking from SDA Bocconi. He worked at an international banking institution in corporate and restructuring matters. A specialist in economics and finance, he closely follows economic trends, financial markets, and central bank monetary policies. He also follows the current cultural and political landscape. He is married with four children, and blogs on Catholic issues (in Italian) at sabinopaciolla.com