Chinmay D. Pai

Digital Dams That Don't Hold: Why Internet Censorship Fails as Technical Policy

Throughout history, governments and authorities have tried to control the flow of information. From the burning of the Library of Alexandria and the Catholic Church’s Index of Forbidden Books to Nazi book burnings and the Soviet Union’s censorship apparatus, controlling what people can read, share, and discuss has been a consistent impulse of those in power. In the digital age, this age-old impulse manifests as internet censorship—but with a crucial difference: the internet was fundamentally designed to resist it.

When authorities attempt to censor the internet, they’re effectively trying to plug holes in a digital dam with their fingers. The information doesn’t stop—it simply finds another path. This isn’t just a metaphor; it’s the technical reality of how the internet was architected, with protocols specifically designed to route around damage and disruption. As internet pioneer John Gilmore famously stated:

“The internet interprets censorship as damage and routes around it.”

Recent events in Spain and India highlight this technological reality in striking ways.

In Spain, LaLiga, the top division of Spanish professional football, secured a court order allowing it to unilaterally block IP addresses it believes are being used to pirate livestreams of matches. Rather than targeting specific infringing services, LaLiga took the extreme step of blocking entire Cloudflare IP ranges while matches are live, affecting thousands of legitimate websites that happen to use Cloudflare’s infrastructure during those weekend windows (Cybernews: Spain’s fight against La Liga streaming pirates hurts thousands of innocent sites). Spanish authorities complied with this overbroad implementation without pushback, demonstrating how easily targeted enforcement can expand into widespread disruption. Cloudflare responded with legal action against LaLiga over the blocks, but the Spanish courts upheld the order (LaLiga: Commercial Court No. 6 of Barcelona upholds the judgment issued in favour of LALIGA).

In India, the Karnataka High Court directed the government to block ProtonMail entirely because a single Delhi-based design firm allegedly received emails with obscene content from a ProtonMail address (TechCrunch: Indian court orders blocking of Proton Mail). This isn’t even the first such attempt in India; in early 2024, Tamil Nadu police requested a nationwide ProtonMail block after schools received bomb threats via the service. When Swiss authorities intervened in that case, ProtonMail aptly observed that “Blocking access to Proton Mail simply prevents law-abiding citizens from communicating securely and does not prevent cybercriminals from sending threats with another email service, especially if the perpetrators are located outside of India.”

These cases exemplify a fundamental misunderstanding of how the internet works—and why digital censorship is technically doomed to fail.

The Continuing Pattern of Failed Censorship

The cases from Spain and India described above exemplify a technical reality that has held throughout the internet’s history. The specifics vary, from blocking an infrastructure provider like Cloudflare to censoring a communication platform like ProtonMail, but the underlying technical dynamics are identical.

Nor are these isolated incidents. Similar scenarios have played out across countries and contexts with remarkably consistent technical outcomes, from Turkey’s YouTube blocks to Russia’s Telegram ban, regardless of the political or social setting. To understand why these censorship attempts keep failing, we need to examine the architectural features of the internet that make censorship technically problematic, whatever the justification behind it.

The Technical Reality: Four Fundamental Flaws in Internet Censorship

Internet censorship faces several fundamental technical challenges that make it both ineffective and harmful, regardless of the context in which it’s implemented:

1. Digital Detours: Why Technical Circumvention is Inevitable

Any moderately technical user can bypass most blocking mechanisms with common tools: VPNs, proxy services, alternative DNS resolvers, the Tor network, and mirror sites.

According to a 2017 Global Web Index study (Global Web Index: VPN Usage Study, 2017), approximately 30% of internet users worldwide have used VPNs, with that figure rising to over 60% in countries with heavier internet restrictions. In countries with significant censorship, these circumvention tools become common knowledge, rendering blocks largely symbolic rather than effective.
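
To make the circumvention point concrete, the sketch below shows how a user sidesteps DNS-based blocking simply by querying a resolver other than the ISP’s default. It uses the third-party dnspython library, and the hostname and resolver addresses are placeholders rather than anything taken from the cases above.

```python
# Sketch: compare answers from an ISP resolver vs. an independent resolver.
# Requires the third-party "dnspython" package; the domain and resolver IPs
# below are placeholders for illustration only.
import dns.resolver

def resolve_with(nameserver: str, hostname: str) -> list[str]:
    """Query a specific DNS resolver instead of the system default."""
    resolver = dns.resolver.Resolver(configure=False)  # ignore the OS resolver config
    resolver.nameservers = [nameserver]
    answer = resolver.resolve(hostname, "A")
    return [record.address for record in answer]

if __name__ == "__main__":
    hostname = "blocked.example.org"   # hypothetical blocked site
    isp_dns = "192.0.2.53"             # hypothetical ISP resolver serving a block page
    public_dns = "9.9.9.9"             # any independent public resolver

    for label, server in [("ISP resolver", isp_dns), ("Public resolver", public_dns)]:
        try:
            print(label, "->", resolve_with(server, hostname))
        except Exception as exc:
            print(label, "-> lookup failed:", exc)
```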

2. The Streisand Effect: When Censorship Becomes Publicity

Named after a 2003 incident in which Barbra Streisand’s attempt to suppress photographs of her Malibu home resulted in those images being seen by millions, the “Streisand Effect” describes the unintended consequence in which efforts to hide or censor information lead to its wider dissemination.

This phenomenon has been extensively studied by internet researchers. When authorities block access to a service like ProtonMail or a website, several counterproductive effects occur:

  1. Immediate attention amplification: News of the block spreads quickly through social media and tech publications, drawing attention to the very service authorities wanted to suppress.
  2. Search interest spike: Research from the Berkman Klein Center for Internet & Society at Harvard University found that Google searches for blocked content typically increase by 200-300% in the days following blocking actions (Harvard: New Berkman Klein Center study examines global internet censorship).
  3. Technical education: Blocking motivates users to learn circumvention techniques they might never have explored otherwise, creating a more technically sophisticated user base.
  4. Martyrdom effect: Blocked services often gain sympathy and support, particularly from digital rights advocates and privacy-conscious users who view them as victims of censorship.

The Spanish Cloudflare block and the Indian ProtonMail block are likely to repeat this pattern. When Turkey blocked Twitter in 2014, for instance, Turkish Twitter usage actually increased by 138% in the immediate aftermath as users found and shared workarounds.

3. Splash Damage: The Unavoidable Collateral Impact

Internet censorship is not surgical; it is closer to carpet bombing in the digital space. The internet’s infrastructure is built on shared resources, interconnected systems, and layered services, an architecture that makes precise targeting virtually impossible. When authorities block infrastructure providers like Cloudflare or entire communications platforms like ProtonMail, they disrupt access to an enormous range of legitimate services. This affects journalists communicating with sources, businesses protecting sensitive communications, educational institutions, healthcare providers, and countless other legitimate users. The Internet Society’s comprehensive 2017 report on content blocking (Internet Society: Internet Shutdowns and Content Blocking not the answer) found that all of the major censorship techniques produce substantial collateral damage.

This technical reality means there is no “clean” way to implement censorship without affecting legitimate users and services—a fundamental architectural limitation of the internet that cannot be engineered away.
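
The shared-infrastructure problem is easy to demonstrate. The sketch below, a minimal illustration using only the Python standard library and placeholder hostnames, resolves a handful of unrelated sites and groups them by IP address; on a CDN such as Cloudflare, many independent sites map onto the same small pool of addresses, so blocking one address takes all of them down.

```python
# Sketch: unrelated sites that resolve to the same address share fate under an
# IP block. Hostnames here are placeholders for illustration only.
import socket
from collections import defaultdict

def addresses_for(hostname: str) -> set[str]:
    """Return the IPv4 addresses a hostname currently resolves to."""
    try:
        infos = socket.getaddrinfo(hostname, 443, family=socket.AF_INET)
        return {info[4][0] for info in infos}  # sockaddr tuple -> IP address
    except socket.gaierror:
        return set()

if __name__ == "__main__":
    sites = ["site-a.example.com", "site-b.example.net", "site-c.example.org"]
    shared = defaultdict(list)  # IP address -> hostnames served from it
    for site in sites:
        for ip in addresses_for(site):
            shared[ip].append(site)

    for ip, hostnames in shared.items():
        if len(hostnames) > 1:
            print(f"Blocking {ip} would also cut off: {', '.join(hostnames)}")
```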

4. Technological Arms Race: The Never-Ending Game of Cat and Mouse

Internet censorship initiates a technological arms race with a predictable pattern:

  1. Authorities implement a censorship technique
  2. Services and users develop countermeasures
  3. Authorities respond with more sophisticated censorship
  4. Services and users deploy more advanced circumvention

This pattern has played out repeatedly across decades of internet governance attempts worldwide. When messaging apps are blocked, they implement domain fronting techniques. When websites are censored, mirror sites proliferate. When email services are restricted, encrypted alternatives emerge.
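
As one concrete illustration of a step-2 countermeasure, here is a conceptual sketch of domain fronting: the censor observes a DNS lookup and TLS handshake for an innocuous front domain, while the HTTP Host header inside the encrypted tunnel names the blocked service hosted on the same cloud infrastructure. The domains are placeholders, and most large CDNs have since restricted this technique, so treat it as an illustration of the cat-and-mouse dynamic rather than a working recipe.

```python
# Conceptual sketch of domain fronting using the "requests" library.
# Domains are placeholders; many CDNs now reject mismatched Host headers.
import requests

FRONT_DOMAIN = "allowed-front.example.com"      # what the censor sees in DNS and SNI
HIDDEN_SERVICE = "blocked-service.example.net"  # real destination on the same CDN

def fronted_get(path: str = "/") -> requests.Response:
    """Connect to the front domain, but ask the CDN to route to the hidden host."""
    return requests.get(
        f"https://{FRONT_DOMAIN}{path}",
        headers={"Host": HIDDEN_SERVICE},  # inner HTTP request names the real host
        timeout=10,
    )

if __name__ == "__main__":
    try:
        response = fronted_get()
        print(response.status_code)
    except requests.RequestException as exc:
        print("Request failed (expected with placeholder domains):", exc)
```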

Research from the Oxford Internet Institute’s Internet Policy Observatory (MIT Press: Access Controlled) has documented how this technological evolution consistently outpaces censorship efforts.

This asymmetric dynamic heavily favors the circumvention side, making censorship a temporary solution at best and often merely symbolic after initial implementation.

History Repeats Itself: Case Studies in Failed Censorship

History provides ample evidence that website blocking rarely achieves its intended goals:

Turkey’s YouTube Ban (2007-2010)

When Turkey blocked YouTube over content deemed insulting to Atatürk, network traffic analysis revealed that YouTube usage actually increased by 33% during the ban. According to research published in the International Journal of Communication, approximately 70% of Turkish internet users learned to use DNS redirection or proxy services for the first time. Turkish users rapidly shared circumvention techniques through text messages and forums, with an estimated 4.5 million Turkish users regularly accessing YouTube despite the official block. By the ban’s end, VPN usage in Turkey had increased from under 10% to over 30% of internet users.

The UK’s Piracy Site Blocks (2012-present)

When the UK implemented court-ordered ISP blocks of The Pirate Bay in 2012, research by the University of Amsterdam (UvA-DARE: Global Online Piracy Study, pp. 27-28) found that P2P traffic dropped initially but normalized to pre-block levels within just 7 days. Traffic analysis showed that within one month, there were over 300 functioning Pirate Bay proxy sites being used by UK users. Perhaps most telling: during the first year of blocking, the number of UK-based Pirate Bay users increased by approximately 12%, despite the “block.” Technical monitoring revealed that users rapidly shifted to using VPNs, proxy services, alternative DNS servers, Tor, and mirror sites.

Russia’s Telegram Blocking Attempt (2018)

Russia’s communications regulator Roskomnadzor blocked over 19 million IP addresses in an attempt to disrupt Telegram. Despite this massive action, Telegram’s usage in Russia dropped by only 7% for approximately three days before exceeding pre-block levels. Within two weeks, Telegram had gained approximately 500,000 new Russian users. To circumvent the block, Telegram implemented domain fronting techniques using Google and Amazon cloud services, prompting Russia to inadvertently block millions of innocent websites and services. By the time Russia officially abandoned the blocking effort in 2020, Telegram had nearly doubled its Russian user base to 30 million.

China’s Great Firewall vs. VPN Usage

China has invested an estimated $6-10 billion annually in its censorship infrastructure—the most sophisticated system in the world. Despite this enormous investment, studies by Citizen Lab (Search Monitor Project: Toward a Measure of Transparency) show that between 29% and 35% of Chinese internet users regularly employ VPNs to bypass censorship. China’s technical cat-and-mouse game includes deep packet inspection that can detect and block many VPN protocols, yet new obfuscation techniques like Shadowsocks (created by a Chinese programmer in 2012) are continuously developed and widely used. Even after China criminalized unauthorized VPN usage in 2018 with penalties of up to $2,200, VPN usage has continued to grow at approximately 5-7% annually.

Noble Intentions, Same Technical Failures: Censorship Under Different Pretexts

Not all censorship attempts are made to suppress political opposition or protect business interests. In recent years, governments have increasingly justified internet restrictions as necessary to combat misinformation or protect “national interests.” These efforts face the same technical limitations:

  1. Brazil’s Election Misinformation Efforts (2018-2022): During Brazil’s contentious elections, the Superior Electoral Court ordered the blocking of dozens of news sites and social media accounts accused of spreading election misinformation. Despite deploying sophisticated content filtering, technical analysis showed that within 48 hours, over 85% of the blocked content had migrated to alternative domains, Telegram channels, and WhatsApp groups. Court documents revealed that for every account blocked, an average of 3.7 mirror accounts appeared elsewhere, often with larger audiences due to the “forbidden information” appeal.

  2. Turkey’s “National Security” Blocks (2016-present): Following the 2016 coup attempt, Turkey blocked over 240,000 websites and 150,000 URLs under national security provisions. Network measurement studies by the Internet Society found that despite this massive effort, politically sensitive content saw a 42% increase in circulation through encrypted messaging apps and VPN-protected social media. By 2020, Turkey had the second-highest VPN adoption rate globally at 32% of internet users—a direct consequence of the censorship regime.

  3. India’s “Public Order” Shutdowns: India leads the world in regional internet shutdowns, implementing over 400 since 2016, often justified by preventing the spread of rumors that could lead to violence. Research from the Internet Freedom Foundation found that during shutdowns, misinformation actually increased by 33-41% in affected regions as legitimate news sources became inaccessible while rumors spread through SMS and offline networks. The technical inability to selectively block only “harmful” content led to blanket approaches that proved counterproductive.

  4. South Korea’s “Fake News” Regulations: South Korea’s attempts to algorithmically identify and block “fake news” through its comprehensive filtering system demonstrated the technical impossibility of accurate content classification. A 2021 Seoul National University study (Fact-Checking and Audience Engagement) found the system had a 37% false positive rate, blocking legitimate news and commentary, while missing 42% of content that violated the same standards but used simple evasion techniques like image-based text or deliberate misspellings (a toy sketch of this evasion problem follows this list).
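
The toy sketch below, with an invented blocklist and invented sample messages, shows the same failure modes in miniature: trivially obfuscated text slips past an exact-match filter, while legitimate commentary that quotes a blocked phrase gets caught.

```python
# Toy sketch of a keyword-based content filter and trivial evasions.
# Blocklist terms and sample messages are invented for illustration.
BLOCKLIST = {"forbidden-topic", "banned-claim"}

def is_blocked(text: str) -> bool:
    """Flag text containing any exact blocklisted term (case-insensitive)."""
    lowered = text.lower()
    return any(term in lowered for term in BLOCKLIST)

if __name__ == "__main__":
    samples = [
        "A report about the forbidden-topic situation",               # caught
        "A report about the f0rbidden-t0pic situation",               # misspelling slips through
        "A report about the for bidden topic situation",              # spacing slips through
        "Legitimate analysis quoting the banned-claim to debunk it",  # false positive
    ]
    for text in samples:
        print("BLOCKED" if is_blocked(text) else "allowed", "|", text)
```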

These examples demonstrate that regardless of intention—whether preventing harm from misinformation or protecting alleged national interests—the technical limitations of internet censorship remain constant. Well-intentioned efforts face the same circumvention techniques, collateral damage problems, and ultimate ineffectiveness as more obviously self-interested censorship attempts. The internet’s distributed architecture fundamentally resists centralized content control, regardless of the justification behind that control.

For country-by-country data on internet freedom, political rights, and civil liberties, see Freedom House - Country Scores.

Beyond Censorship: Addressing Root Causes with Technical Solutions

Effective approaches to online problems typically address underlying issues rather than symptoms:

  1. Market-Based Solutions to Piracy: Research consistently shows that improving legal access to content at reasonable prices dramatically reduces piracy. When Netflix entered new markets, piracy rates measurably declined, according to a 2018 study by the European Commission’s Joint Research Centre (EU JRC: Online Copyright Enforcement, Consumer Behavior, and Market Structure). This study found that for every 4.7% increase in legal streaming service subscriptions, there was a 3.1% decrease in visits to illegal streaming sites.

Spotify’s expansion across Europe similarly correlated with significant decreases in music piracy. When content is affordable, convenient to access, and available without artificial delays, the incentive to seek illegal alternatives naturally diminishes.

  2. Targeted Enforcement for Abusive Communications: Rather than blocking entire platforms like ProtonMail, digital forensics and international legal cooperation have proven more effective against targeted abuse. Email headers, server logs, and digital footprints can often identify perpetrators even when they use privacy-focused services (a small header-parsing sketch follows this list).

Law enforcement agencies with proper training and resources can work through established legal channels such as Mutual Legal Assistance Treaties (MLATs) to request specific information from service providers about abusive accounts, all without disrupting service for legitimate users.

  3. Collaborative Content Moderation Frameworks: For harmful content issues, multi-stakeholder approaches that bring together platforms, civil society, and governments to establish transparent content moderation standards have shown more long-term success than blocking. The Global Internet Forum to Counter Terrorism (GIFCT) demonstrates how cross-platform cooperation can address specific problematic content while preserving overall access and functionality.
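
To ground the point about email headers in item 2, here is a standard-library sketch that extracts the Received chain from a raw message. The sample message is fabricated, and privacy-focused providers deliberately reveal less in their headers, but this is the kind of trail investigators typically start from.

```python
# Sketch: extracting the Received chain from a raw email with the stdlib.
# The sample message below is fabricated for illustration.
from email import message_from_string

RAW_MESSAGE = """\
Received: from mail.intermediate.example (203.0.113.7) by mx.victim.example; Tue, 6 May 2025 10:01:00 +0000
Received: from sender.origin.example (198.51.100.22) by mail.intermediate.example; Tue, 6 May 2025 10:00:58 +0000
From: someone@origin.example
To: target@victim.example
Subject: sample

body text
"""

def received_chain(raw: str) -> list[str]:
    """Return Received headers oldest-first, i.e. the path the message took."""
    msg = message_from_string(raw)
    return list(reversed(msg.get_all("Received") or []))

if __name__ == "__main__":
    for hop_number, hop in enumerate(received_chain(RAW_MESSAGE), start=1):
        print(f"hop {hop_number}: {hop}")
```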

Conclusion: Technical Reality vs. Political Expediency

The technical reality of internet architecture makes censorship fundamentally flawed as a policy approach. It resembles trying to stop a river with a chain-link fence—water simply flows around it, while debris gets caught.

When policymakers and authorities implement internet censorship, they’re often choosing political expediency over technical effectiveness. Censorship appears decisive and creates the impression of immediate action—a press release can announce that something has been “blocked” and the problem “solved.” This political expediency—prioritizing the appearance of action over actual results—explains why censorship remains popular despite decades of technical evidence demonstrating its ineffectiveness.

What we consistently observe across different countries and contexts is that internet censorship creates predictable circumvention, substantial collateral damage for legitimate users and services, unintended publicity for the targeted content, and an escalating technical arms race.

The gap between political expediency and technical reality continues to widen. While blocking ProtonMail or Cloudflare might satisfy short-term political needs to appear “tough on crime” or “protective of intellectual property,” the technical architecture of the internet ensures these measures will fail to achieve their stated objectives while causing substantial collateral damage.

As we navigate complex digital challenges like piracy, harmful communications, and content moderation, effective approaches must work with, rather than against, the internet’s technical architecture. Solutions that address economic incentives, create proper accountability mechanisms, and establish collaborative governance frameworks have consistently outperformed censorship in both effectiveness and minimizing collateral damage.

The recent censorship attempts in various countries are likely to follow the same well-documented pattern we’ve seen for over two decades: temporary disruption followed by widespread circumvention, with legitimate users bearing the greatest burden. This isn’t a political conclusion—it’s what the technical evidence consistently demonstrates.

Got any questions or comments? Drop me an e-mail!