Chapter 1: From Tahrir Square to Times Square: When the Machine Turned Against Democracy
The Utopian Misreading
In June 2010, Egyptian police beat a young man named Khaled Said to death in Alexandria. The official narrative - that he had choked on drugs while resisting arrest - might have held, as such narratives often do in authoritarian states. But the photographs that appeared online told a different story. They showed a face destroyed beyond recognition, a shattered jaw, teeth knocked out. This was not an accidental death; it was murder (Ghonim, 2012).
What happened next seemed, at the time, to vindicate every utopian promise ever made about the internet. Wael Ghonim, a Google executive, created a Facebook page called “We Are All Khaled Said.” Within months, it had hundreds of thousands of followers. On January 25th, 2011, the page issued a call to gather in Tahrir Square. Tens of thousands came. For eighteen days, the world watched a revolution unfold in real time, mediated through the very platforms that had organised it. When Hosni Mubarak finally fell, the narrative seemed complete: social media had toppled a dictator (Britannica, 2024).
This interpretation was not confined to Egypt. Across 2011, similar stories emerged from Tunisia, where Mohamed Bouazizi’s self-immolation the previous December had sparked the revolution that drove Ben Ali from power, and from New York, where Occupy Wall Street used Twitter and Tumblr to spread its message about economic inequality. The pattern appeared universal: give people the tools to connect and communicate, and democratic change would naturally follow. Western media christened it the “Twitter Revolution,” the “Facebook Revolution.” Academics wrote papers celebrating how social media had empowered citizens against authoritarian regimes. Technology executives gave TED talks about changing the world (Howard et al., 2011).
The intellectual framework underpinning this optimism had deep roots. It drew from the cyber-libertarian ideology that had emerged from Silicon Valley in the 1990s, which held that the internet was inherently democratising, that information wanted to be free, and that hierarchical power structures would inevitably crumble in the face of networked communication. This was the gospel according to figures like John Perry Barlow, whose 1996 Declaration of the Independence of Cyberspace had proclaimed that governments of the industrial world had no sovereignty in the digital realm. The Arab Spring seemed to be the ultimate vindication of this worldview.
But this reading fundamentally misunderstood the nature of the technology. What looked like liberation was actually revealing a deeper truth about networked platforms: they are politically neutral in the most dangerous sense. The same tools that could organise a protest could also track the protesters. The same network that spread messages of hope could spread messages of fear and hate with equal efficiency. The same algorithms that showed videos of protests in Tahrir Square could show conspiracy theories about those protests. The technology didn’t care which was true; it cared only about engagement. This neutrality, this indifference to truth, would prove to be not a bug, but the defining feature of the system.
The mistake was in confusing the medium with the message, in believing that the technological capacity for horizontal communication would automatically produce democratic outcomes. What the cyber-utopians failed to grasp was that these platforms were not neutral public squares, but commercial enterprises designed to maximise user engagement for advertising revenue. The architecture of these systems - the algorithms that determined what content users saw, the metrics that measured success, the business models that funded their operation - was optimised for emotional arousal, not informed deliberation. The Arab Spring had succeeded not because social media was inherently democratising, but because, in that particular moment, the goals of the protesters happened to align with the platforms’ engagement metrics. That alignment would not last.
The Authoritarian Counter-Revolution
While Western observers were celebrating digital liberation, authoritarian regimes were learning a very different lesson. The images from Tahrir Square and Zuccotti Park were not inspiring hope in Moscow and Beijing; they were provoking alarm. Vladimir Putin, watching tens of thousands of Russians take to the streets of Moscow in December 2011 to protest rigged parliamentary elections, saw not the future of democracy but a new form of Western subversion. The protests, organised on Facebook and VKontakte, represented an existential threat to his power (The Guardian, 2012).
Putin’s paranoia was not entirely unfounded, at least in his own telling. He believed - or at least claimed to believe - that Hillary Clinton and the U.S. State Department were behind the protests, using social media as a tool of regime change. Whether this belief was sincere or cynically deployed for domestic consumption matters less than what it produced: a determination to master these platforms before they could be used against him again.
Putin’s response revealed a sophisticated understanding of how these platforms actually worked. Rather than simply banning them - the reflexive authoritarian response China had chosen - he set out to turn them to his own ends. If the internet could be used to create a reality in which he was a corrupt autocrat, it could also be used to create a reality in which he was a strong leader besieged by foreign enemies. The key insight was that social media platforms were not inherently democratising; they were inherently chaotic. And chaos could be weaponised.
In a nondescript office building at 55 Savushkina Street in St. Petersburg, the Internet Research Agency (IRA) was born. This was not traditional propaganda, which sought to convince people of a particular version of events. It was something far more sophisticated: an industrial operation designed to manufacture not just pro-Kremlin messages but discord itself. IRA employees, working in shifts around the clock, created thousands of fake personas. They posed as American patriots and Black Lives Matter activists, as Christian conservatives and Muslim extremists. They amplified whatever topics would cause the most division, the most anger, the most confusion (CNN, 2023; Spyscape, n.d.).
The operation’s sophistication lay in its understanding of American fault lines. IRA operatives didn’t need to invent divisions; they simply needed to amplify existing ones. They created Facebook groups for both sides of contentious issues - immigration, gun rights, racial justice - and then used these groups to organise real-world events designed to bring opposing sides into conflict. In one particularly brazen operation, they organised both a pro-Islam rally and an anti-Islam protest in the same location in Houston, then sat back and watched Americans confront each other in the streets.
Lyudmila Savchuk, a former IRA employee who later became a whistleblower, described the operation’s methodology. Workers were given quotas - so many posts per day, so many comments, so many likes. They were trained to mimic authentic American writing styles, complete with slang and deliberate spelling mistakes. They studied American culture through television shows and social media to make their personas more convincing. Their goal, she explained, was not to win arguments but to make argument itself impossible; to destroy the very concept of shared reality (The Guardian, 2015).
This was the authoritarian awakening: the realisation that the power of social media lay not in its ability to connect people, but in its ability to isolate them. The platforms could trap users in echo chambers of their own beliefs, feed them constant outrage and fear, and turn them against their neighbours. What Silicon Valley had built as a tool for connection could be repurposed as a machine for fragmenting society. Putin and his strategists understood this before the platforms’ creators did.
The implications of this discovery extended far beyond Russia. Putin’s model would be studied and replicated by authoritarian regimes around the world. But more importantly, it revealed a fundamental vulnerability in democratic societies: their openness, their commitment to free speech, their pluralism - all the values that made them democratic - also made them uniquely susceptible to this form of attack. The very freedoms that democracies cherished could be weaponised against them.
The Business Model of Manipulation
The authoritarians, however, were only exploiting a vulnerability that had been built into the system from the beginning. While Putin was learning to use social media for political ends, Silicon Valley had already weaponised it for commercial ones. The business model of surveillance capitalism rested on a simple premise: the more you knew about someone, the better you could predict and influence their behaviour, and the more that influence was worth to advertisers (Zuboff, 2019).
To make this system work, the platforms needed to maximise engagement. They needed to keep users scrolling, clicking, watching. The algorithms they built to achieve this were not neutral tools designed to show people what they wanted to see. They were behaviour modification systems designed to provoke the strongest possible emotional response. The engineers who built them believed they were connecting the world, democratising information, bringing people together. What they were actually building was a global-scale machine for amplifying human psychology’s worst tendencies (Crockett, 2017).
The algorithms learned quickly. Research would later demonstrate that lies travelled six times faster than truth on these platforms (Vosoughi, Roy, and Aral, 2018). This was not because users were particularly gullible or malicious, but because false information tended to be more novel, more surprising, and more emotionally arousing than accurate information. The algorithms, built for engagement, naturally favoured the false over the true.
Moral outrage was particularly effective: each moral-emotional word in a tweet increased its chance of being retweeted by roughly 20 percent (Crockett, 2017). The platforms learned that the best way to keep people engaged was to show them content that confirmed their deepest fears, validated their darkest prejudices, and told them they were right while everyone else was wrong. This created a feedback loop: users who engaged with outrage-inducing content were shown more of it, which made them more likely to engage with even more extreme content, which in turn changed what the algorithm showed them next. Over time, users were gradually radicalised without ever consciously choosing to be.
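To make the mechanism concrete, here is a minimal, purely illustrative sketch in Python. It is not any platform’s actual code: the word list, the baseline share probability, and the 1.2 multiplier per moral-emotional word (a stand-in for the roughly 20 percent boost reported in Crockett, 2017) are all assumptions chosen for the example.

```python
# Purely illustrative sketch of an engagement-ranked feed; not any platform's
# real code. Assumed: a tiny word list, a 0.05 baseline share probability, and
# a 1.2x boost per moral-emotional word (a stand-in for the roughly 20 percent
# effect reported in Crockett, 2017).

MORAL_EMOTIONAL_WORDS = {"outrage", "disgusting", "betrayal", "evil", "attack"}

def predicted_engagement(post: str) -> float:
    """Toy score: each moral-emotional word multiplies the baseline by 1.2."""
    words = (word.strip(".,!?").lower() for word in post.split())
    hits = sum(word in MORAL_EMOTIONAL_WORDS for word in words)
    return 0.05 * (1.2 ** hits)

def rank_feed(posts: list[str]) -> list[str]:
    """An engagement-maximising feed sorts purely by predicted engagement, so
    the most emotionally arousing posts rise to the top regardless of accuracy."""
    return sorted(posts, key=predicted_engagement, reverse=True)

posts = [
    "City council publishes its annual budget report",
    "This disgusting betrayal is an evil attack on your family",
    "New study finds a modest improvement in local air quality",
]
for post in rank_feed(posts):
    print(f"{predicted_engagement(post):.3f}  {post}")
```

None of the specifics matter; the point is the objective. Any system trained to predict engagement is likely to learn something like this boost on its own, because outrage is what gets engaged with.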
Consider the trajectory of Caleb Cain, a young American who in the mid-2010s began watching YouTube videos about video games. The platform’s recommendation algorithm nudged him towards political commentary and then progressively further right. First came videos mocking feminists, then content about the dangers of immigration, then white nationalist material. Within a few years, Cain had been completely radicalised without ever leaving his bedroom. He was living in a reality constructed for him, piece by piece, by an algorithm optimised for engagement, not truth (The New York Times, 2019).
Cain’s story is important not because it is unique, but because it is representative. The radicalisation pipeline that YouTube’s algorithm created was not designed to radicalise; it was designed to maximise watch time. Radicalisation was simply a side effect of that optimisation. The algorithm discovered that people who watched moderate political content would often watch more extreme content, and that people who watched extreme content would watch it for longer periods. From the algorithm’s perspective, this was success. From society’s perspective, it was a disaster.
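A toy simulation makes the same point. The sketch below is not YouTube’s recommender; it simply assumes, for the sake of argument, that more extreme content holds attention slightly longer on average, and then lets a standard watch-time-maximising strategy (epsilon-greedy selection over a hypothetical 0-to-1 “extremity” scale) decide what to recommend. Every number in it is invented for illustration.

```python
# Toy watch-time "bandit", invented for illustration; not any real recommender.
# The only assumption doing the work: more extreme content is watched a little
# longer on average, so a system that optimises watch time alone drifts towards
# it without anyone choosing that outcome.
import random

LEVELS = [0.0, 0.25, 0.5, 0.75, 1.0]  # hypothetical scale: 0 = mainstream, 1 = extreme

def simulated_watch_time(extremity: float) -> float:
    """Minutes watched: a noisy baseline plus a small bonus for extremity."""
    return random.gauss(6.0 + 2.0 * extremity, 1.0)

random.seed(1)
totals = {level: 0.0 for level in LEVELS}
counts = {level: 0 for level in LEVELS}

for _ in range(2000):
    if random.random() < 0.1:
        # Occasionally explore a random extremity level.
        level = random.choice(LEVELS)
    else:
        # Otherwise exploit whichever level has the best average watch time so far.
        level = max(LEVELS, key=lambda l: totals[l] / max(counts[l], 1))
    totals[level] += simulated_watch_time(level)
    counts[level] += 1

for level in LEVELS:
    print(f"extremity {level:.2f}: recommended {counts[level]:4d} times")
```

Run it and the most extreme level ends up recommended far more often than any other - not because anyone told the system to prefer extremity, but because watch time was the only thing being measured.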
This was the convergence that no one had anticipated: authoritarian actors discovering that the platforms built by Silicon Valley for profit were perfectly designed for political manipulation. The Machine didn’t need to be hacked or subverted. It just needed to be understood and exploited. The business logic of maximising engagement aligned perfectly with the political logic of sowing division. Angry people click more. Afraid people watch more. People who believe they’re under attack share more. The authoritarians and the tech companies had different goals, but they had discovered they were using the same playbook.
The platforms’ business model created what we might call a ‘vulnerability by design.’ The same features that made them profitable - the algorithmic curation, the personalisation, the optimisation for engagement - also made them exploitable. The IRA didn’t need to create new tools or develop new techniques; they simply needed to understand how the existing tools worked and use them more effectively than anyone else. They were, in a sense, the platforms’ most sophisticated users.
The Perfect Storm
By the end of 2015, all the elements were in place for what would follow. In Russia, a new generation of political technologists had perfected the art of using social media to destabilise democracies. In Silicon Valley, algorithms were becoming increasingly sophisticated at predicting and manipulating human behaviour. And in the minds of millions of users, a new kind of reality was taking shape: fragmented, polarised, and increasingly detached from any shared understanding of truth.
What made this moment so dangerous was that it represented not a conspiracy, but a convergence. The authoritarians hadn’t built The Machine; they had simply learned to drive it better than its creators. The tech companies hadn’t set out to undermine democracy; they had simply built a system that prioritised engagement over everything else, including truth. And the users hadn’t deliberately chosen to live in filter bubbles; they had simply responded to the incentives the platforms had created.
The tragedy of this moment is that it was entirely predictable, yet almost no one predicted it. The warning signs were there: the platforms’ business models incentivised engagement over truth, the algorithms amplified extreme content, the architecture enabled manipulation. But the cyber-utopian narrative was so seductive, so deeply embedded in Silicon Valley’s self-conception, that these warnings were dismissed or ignored. The platforms’ creators genuinely believed they were building tools for liberation, even as they were building the infrastructure for a new form of control.
This convergence would have consequences that extended far beyond what anyone imagined in those early, optimistic days of 2011. The tools that were supposed to democratise information had created an information ecosystem where lies travelled faster than truth. The platforms that were supposed to connect people had isolated them in personalised realities. The technology that was supposed to empower citizens had given authoritarians a new weapon more powerful than anything in their previous arsenal.
The utopian dream of a connected world was about to collide with the reality of a world torn apart by the very tools meant to unite it. The digital coup was about to begin. But to understand how it succeeded, we must first understand how The Machine was built; how Silicon Valley’s venture capital model and its obsession with growth at any cost created the algorithms that would prove so easy to weaponise.
Next in the series: Chapter 2 - ‘The Gospel of Growth: How the Machine Was Built.’
References
Alaimo, K., 2015. How the Facebook Arabic page “We Are All Khaled Said” helped promote the Egyptian revolution. Social Media + Society, 1(2). Available at: https://doi.org/10.1177/2056305115604854 [Accessed 17 November 2025].
Britannica, 2024. Egypt Uprising of 2011. Encyclopaedia Britannica. Available at: https://www.britannica.com/event/Egypt-Uprising-of-2011 [Accessed 17 November 2025].
CBS News, 2011. Occupy Wall Street uses social media to spread nationwide. CBS News, [online] 13 October. Available at: https://www.cbsnews.com/news/occupy-wall-street-uses-social-media-to-spread-nationwide/ [Accessed 17 November 2025].
CNN, 2023. Wagner chief admits to founding Russian troll farm. CNN, [online] 14 February. Available at: https://www.cnn.com/2023/02/14/europe/russia-yevgeny-prigozhin-internet-research-agency-intl [Accessed 17 November 2025].
Crockett, M.J., 2017. Moral outrage in the digital age. Nature Human Behaviour, 1(11), pp.769-771.
Democracy Now, 2011. The Revolution Will Be Live Streamed: Global Revolution TV, the Occupy Movement’s Video Hub. Democracy Now, [online] 18 November. Available at: https://www.democracynow.org/2011/11/18/the_revolution_will_be_live_streamed [Accessed 17 November 2025].
Ghonim, W., 2012. Revolution 2.0: The Power of the People Is Greater Than the People in Power. Boston: Houghton Mifflin Harcourt.
Howard, P.N., Duffy, A., Freelon, D., Hussain, M.M., Mari, W. and Mazaid, M., 2011. Opening closed regimes: what was the role of social media during the Arab Spring? Project on Information Technology and Political Islam. Available at: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2595096 [Accessed 17 November 2025].
Human Rights Watch, 2010. Egypt: Investigate Killing of Blogger. Human Rights Watch, [online] 11 June. Available at: https://www.hrw.org/news/2010/06/11/egypt-investigate-killing-blogger [Accessed 17 November 2025].
Mother Jones, 2011. We Are the 99 Percent Creators Revealed. Mother Jones, [online] 7 October. Available at: https://www.motherjones.com/politics/2011/10/we-are-the-99-percent-creators/ [Accessed 17 November 2025].
Spyscape, n.d. Inside Russia’s Internet Research Agency. Spyscape. Available at: https://www.spyscape.com/article/inside-russias-internet-research-agency [Accessed 17 November 2025].
The Guardian, 2012. Vladimir Putin accuses Hillary Clinton of encouraging Russian protests. The Guardian, [online] 8 December. Available at: https://www.theguardian.com/world/2012/dec/08/vladimir-putin-hillary-clinton-russia [Accessed 17 November 2025].
The Guardian, 2015. Inside the Kremlin’s hall of mirrors. The Guardian, [online] 2 April. Available at: https://www.theguardian.com/news/2015/apr/02/putin-kremlin-inside-russian-troll-house [Accessed 17 November 2025].
The New York Times, 2019. The Making of a YouTube Radical. The New York Times, [online] 8 June. Available at: https://www.nytimes.com/interactive/2019/06/08/technology/youtube-radical.html [Accessed 17 November 2025].
Tufekci, Z., 2017. Twitter and Tear Gas: The Power and Fragility of Networked Protest. New Haven: Yale University Press.
Vosoughi, S., Roy, D. and Aral, S., 2018. The spread of true and false news online. Science, 359(6380), pp.1146-1151.
Zuboff, S., 2019. The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. New York: PublicAffairs.



