Think and Save the World

What The Internet Was Supposed To Do For Unity And What Happened

The Original Promise: A Brief History Of Techno-Utopianism

The utopian vision of the internet did not emerge from nowhere. It had a coherent intellectual genealogy, rooted in the specific cultural moment of its origin.

The early internet was built by academics, government researchers, and hackers — a community with specific values: openness, sharing, meritocracy of ideas, distrust of hierarchy. The culture of early ARPANET and the bulletin board systems of the 1980s was deeply collaborative. Information was shared freely because information was the point. Commerce was, initially, explicitly prohibited on ARPANET.

When the commercial internet emerged in the early 1990s, this culture collided with capital. The collision produced a mythology that has never quite resolved its contradictions: the idea that you could build a commercial internet that retained the values of the pre-commercial internet. That you could have free access, global connection, and democratic information — and also profit from it.

Stewart Brand's famous aphorism — "information wants to be free" — is almost always quoted incompletely. The full quote is: "Information wants to be free. Information also wants to be expensive. That tension will not go away." The tension never went away. We just pretended it had.

John Perry Barlow's "Declaration of the Independence of Cyberspace" was a serious political document, not just rhetoric. Written in 1996, it articulated a vision of digital space as categorically different from physical space — ungovernable by states, self-organizing by communities, inherently democratizing. "We are creating a world where anyone, anywhere may express his or her beliefs, no matter how singular, without fear of being coerced into silence or conformity."

This vision was partly realized. Dissidents in authoritarian states do use the internet to organize and publish. Citizens in countries with state-controlled media do access alternative information. The Arab Spring, for all its complexity and tragic aftermath, demonstrated that social media could coordinate mass political action at speed that was previously impossible.

But Barlow's declaration missed something fundamental: power doesn't disappear in digital space. It concentrates differently. And the people who built the infrastructure of the commercial internet were not building a public commons. They were building private property.

The Architecture Of Attention Extraction

To understand what went wrong, you have to understand the specific business model that came to dominate the internet: advertising supported by surveillance.

The model works like this: you provide a service for free (search, social networking, email, video). You monetize by selling advertising. Advertisers pay based on attention — the number of people who see their ads and how targeted those ads can be. Targeting requires data about users: their interests, behaviors, relationships, locations, purchasing history, political views. The more data you have, the more precisely you can target, the more you can charge advertisers.
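
To make that incentive chain concrete, here is a toy revenue model in Python. Every number and name is invented for illustration; no real platform prices advertising exactly this way. The structural point is that revenue scales with both attention and targeting precision, so both become optimization targets.

```python
# A toy model of the incentive chain described above. All numbers and
# names are invented; no real platform's pricing works exactly this way.

def ad_revenue(users, hours_per_user, impressions_per_hour,
               base_cpm, targeting_multiplier):
    """Revenue grows with attention (users x hours on platform) and with
    targeting precision (the premium advertisers pay for better data)."""
    impressions = users * hours_per_user * impressions_per_hour
    return impressions / 1000 * base_cpm * targeting_multiplier

# Doubling targeting precision pays exactly as well as doubling
# time-on-platform, so the model pursues both:
print(ad_revenue(1_000_000, 2.0, 60, base_cpm=2.0, targeting_multiplier=1.0))
print(ad_revenue(1_000_000, 4.0, 60, base_cpm=2.0, targeting_multiplier=1.0))
print(ad_revenue(1_000_000, 2.0, 60, base_cpm=2.0, targeting_multiplier=2.0))
```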

This model has an internal logic that is completely coherent and completely incompatible with human flourishing. Every decision made within this model flows from the same constraint: maximize the time users spend on the platform, maximize the data collected about users, maximize the targeting precision that can be sold to advertisers.

Maximizing time on platform requires maximizing engagement. Engagement research, conducted internally by every major platform and externally by academic psychologists, consistently shows the same results: negative emotions drive more engagement than positive ones. Fear, anger, and disgust produce more clicks, more shares, more comments than contentment, joy, or calm. Content that confirms existing beliefs while making outgroup members look threatening generates more engagement than nuanced, balanced information.
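
A small simulation makes the amplification dynamic visible. The assumed correlation between outrage and predicted engagement is a stand-in for the research findings just described, not data from any real platform; given that assumption, a feed ranked purely by predicted engagement mechanically over-represents outrage relative to the underlying pool of content.

```python
# A stylized simulation of engagement-ranked feeds. The link between
# outrage and predicted engagement is an assumption standing in for
# the research findings above, not measured platform data.
import random

random.seed(0)
pool = []
for _ in range(1000):
    outrage = random.random()  # 0 = calm content, 1 = enraging content
    # Assumption: predicted engagement rises with outrage, plus noise.
    engagement = 0.3 + 0.6 * outrage + random.gauss(0, 0.1)
    pool.append({"outrage": outrage, "engagement": engagement})

# Rank purely by predicted engagement and keep the top of the feed.
feed = sorted(pool, key=lambda p: p["engagement"], reverse=True)[:50]

def avg_outrage(items):
    return sum(p["outrage"] for p in items) / len(items)

print(f"pool average outrage: {avg_outrage(pool):.2f}")  # about 0.5
print(f"feed average outrage: {avg_outrage(feed):.2f}")  # far higher
```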

This is not speculation. It is documented. The Facebook whistleblower disclosures, Twitter's own published internal research on algorithmic amplification, the YouTube algorithm audits, the academic studies of recommendation systems — all point to the same conclusion. Systems designed to maximize engagement systematically amplify division, outrage, and tribalism, because these are the emotional states that drive the most time on platform.

Shoshana Zuboff's framework of "surveillance capitalism," developed in her 2019 book The Age of Surveillance Capitalism, identifies this as a fundamentally new form of economic logic — one that doesn't just commodify human behavior but actively shapes it to produce predictable patterns that can be packaged and sold to advertisers and, by extension, to political actors. Cambridge Analytica was not an aberration. It was the logical extension of what every major advertising-supported platform does, applied to electoral manipulation.

Filter Bubbles Are More Complicated Than Eli Pariser Said

The filter bubble concept, introduced by Eli Pariser in 2011, has been enormously influential but is also, in some ways, too simple. The subsequent decade of research has complicated the picture.

Some studies suggest that ideological segregation online is not dramatically worse than ideological segregation offline — that people's social networks, in person and online, have always been relatively homogeneous. The filter bubble may not be the primary driver of political polarization.

What the research more consistently shows is something slightly different: algorithmic amplification systematically surfaces the most extreme content from the other side. You may have a diverse network in terms of political identity, but the content you see from political outgroups has been filtered to maximize your outrage response. You're not in a bubble from outgroup content — you're in a funhouse mirror that shows you the outgroup at its worst.

This distinction matters because the policy response is different. If the problem is purely segregation, the solution is cross-cutting exposure. But if the problem is that cross-cutting exposure is being weaponized to maximize hostility, then more exposure to the other side, without changing the algorithmic context, makes things worse. You need to change how outgroup content is selected and presented, not just whether it appears.

Chris Bail's research at Duke, synthesized in "Breaking the Social Media Prism" (2021), found that paying Twitter users to follow accounts exposing them to opposing political views made them more extreme, not less. The mechanism Bail identifies: the opposing content people encountered was dominated by the other side's most strident voices, so exposure triggered identity-defensive hostility rather than reflection. The architecture of the platform transformed diversity of exposure into an engine of radicalization.

The best-documented radicalization pathway runs through YouTube. Kevin Roose's New York Times investigation, "The Making of a YouTube Radical" (2019), documented how YouTube's recommendation algorithm consistently pushed viewers toward more extreme content — not because extreme content was what they sought, but because extreme content generated more watch time, and watch time was what the algorithm maximized. Users who started with mainstream political content were systematically nudged toward progressively more radical material through a chain of "up next" recommendations.

This is a structural feature, not a bug. The algorithm works as designed. The design is the problem.

The Disinformation Ecosystem: What Infrastructure Enables

The disinformation ecosystem that now saturates the internet is not primarily a problem of human irrationality. It's a problem of incentive architecture that makes the creation and spread of false information more rewarding than the creation and spread of true information.

False information, on average, spreads faster and farther than true information. This finding, documented in a 2018 Science study by Vosoughi, Roy, and Aral analyzing twelve years of Twitter data, is one of the most widely cited results in digital communication research. False information is more novel, more emotionally arousing, and more identity-confirming than accurate information, which tends to be more nuanced and less immediately gratifying. False content exploits the emotional architecture of human attention, and the algorithmic architecture of social platforms amplifies the exploit.

The political economy of disinformation is also important. Creating viral false content is cheap. Fact-checking it is expensive. Publishing a misleading study that serves a corporate or political interest is far cheaper than conducting the peer-reviewed research that debunks it. The debunking never catches up with the original claim.

State actors figured this out and operationalized it. The Internet Research Agency's work during the 2016 U.S. election was not primarily about creating persuasive pro-Russia or pro-Trump content. It was about amplifying division — posting inflammatory content on both sides of American political debates, identifying existing fracture lines and widening them. The goal wasn't a particular political outcome. The goal was to deepen mutual distrust among Americans. The tools that capitalist platform companies had built to maximize engagement were the perfect instrument for this.

Platform Monopoly And The Death Of The Alternatives

The specific features of digital markets have produced platform monopolies that are historically unprecedented in their reach and durability.

Network effects — the feature of communication platforms where each additional user increases the value of the platform for all existing users — produce winner-take-all dynamics. The social network with the most users is the most valuable social network, which attracts more users, which makes it more valuable. This dynamic produces natural monopoly tendencies that are stronger than in most industries.
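
The arithmetic behind winner-take-all is simple. Counting potential pairwise connections (a Metcalfe's-law-style stylization, an upper bound on value rather than a measurement of it) shows quadratic growth:

```python
# Potential pairwise connections in a network of n users: a stylized
# Metcalfe's-law-style upper bound on value, not a real valuation.

def potential_connections(n_users):
    return n_users * (n_users - 1) // 2

for n in (10, 100, 1_000, 10_000):
    print(f"{n:>6} users -> {potential_connections(n):>11,} possible links")
# Ten times the users yields roughly a hundred times the connections,
# which is why the biggest network keeps pulling further ahead.
```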

Combined with the surveillance capitalism business model — which rewards scale because more data means better targeting — and the capacity to acquire competitors before they threaten market position (Instagram and WhatsApp for Facebook, YouTube for Google), the result is a communication infrastructure controlled by a small number of private actors with limited public accountability.

This matters for the unity question because it means that the infrastructure of human connection is governed by private profit motives rather than public interest. A public utility governing water or electricity is accountable to regulators and ultimately to citizens. A social media platform is accountable to its shareholders. When the interests of shareholders and the interests of public connection diverge — and on every major platform they have diverged, consistently, in favor of engagement over wellbeing — there is no mechanism to correct the outcome.

The federal antitrust cases against Google and Meta that advanced in 2024 are the first serious regulatory attempts to address this structural problem in the United States. The EU's Digital Services Act, fully applicable to all platforms since February 2024, is a more comprehensive attempt to impose public-interest accountability on large platforms. Neither is sufficient by itself. Together with parallel regulatory efforts in India, Brazil, Australia, and elsewhere, they represent the beginning of a global reckoning with what happens when you privatize the infrastructure of human communication.

What Actually Works: Evidence From Deliberative Democracy And Alternative Architectures

Within the general disaster, there are working examples of digital communication designed around human connection rather than attention extraction. They're worth examining not as nostalgia but as proof of concept.

Deliberative democracy platforms. Pol.is, a tool developed in the United States and adopted for Taiwan's vTaiwan public consultations championed by digital minister Audrey Tang, is specifically designed to find areas of consensus rather than amplify conflict. The algorithm clusters participants by voting pattern and surfaces statements that receive support across otherwise opposed groups — statements that transcend existing tribal divisions. It is optimized to find agreement rather than division, and it works: Taiwan used it to develop consensus policy on ride-sharing regulation and has applied it to dozens of public consultations. The platform design determines the political outcome.
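
A simplified sketch of that consensus-finding idea in Python follows. Real Pol.is derives the opinion groups itself by reducing and clustering the full vote matrix; here the group labels are assumed as given, and the scoring rule (a statement's worst agreement rate across groups) is a minimal stand-in for its actual statistics.

```python
# A simplified sketch of consensus scoring in the spirit of Pol.is.
# The real system clusters the full vote matrix to find opinion
# groups; here the group labels are assumed as given.

def consensus_scores(votes, groups):
    """votes: {statement: {participant: +1 agree / -1 disagree}}
    groups: {participant: opinion-group label}
    Scores each statement by its WORST agreement rate across groups,
    so only statements every group tends to support score high."""
    scores = {}
    for statement, ballots in votes.items():
        tallies = {}  # group -> (agree_count, total_count)
        for participant, vote in ballots.items():
            agree, total = tallies.get(groups[participant], (0, 0))
            tallies[groups[participant]] = (agree + (vote > 0), total + 1)
        scores[statement] = min(a / t for a, t in tallies.values())
    return scores

votes = {
    "drivers must carry insurance": {"a": 1, "b": 1, "c": 1, "d": 1},
    "ban ride-sharing entirely":    {"a": 1, "b": 1, "c": -1, "d": -1},
}
groups = {"a": "g1", "b": "g1", "c": "g2", "d": "g2"}
print(consensus_scores(votes, groups))  # insurance: 1.0, ban: 0.0
```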

Wikipedia. One of the internet's success stories, consistently overlooked in discussions of platform failure. Wikipedia is a collaborative knowledge project governed by community norms rather than engagement metrics. It is not perfect — it has well-documented problems with editor demographics and representation of certain topics. But it is a functional example of large-scale digital commons governance, run by a nonprofit with a mission of shared knowledge rather than a corporation with a mission of profit. The norms that make it work — verifiability, neutral point of view, assuming good faith — are design choices. They produce a different outcome than Facebook's design choices.

Mastodon and the Fediverse. The federated social network model, in which independent servers run compatible software and can communicate across instances, is a technical architecture designed to prevent monopoly concentration. No single corporation owns Mastodon. No single algorithm determines what content you see. Community governance varies by instance. The result is imperfect — smaller scale, more technically complex, less polished. But it demonstrates that the monopoly architecture is a choice, not a technical necessity.

Local digital infrastructure. The counterintuitive finding is that digital tools work best for connection when they're embedded in existing physical community. Nextdoor works better in neighborhoods where people already have some face-to-face relationships. Local Facebook groups that organize around concrete, in-person activities (mutual aid, neighborhood events, local civic issues) generate more genuine connection than groups organized around abstract identity. The technology serves the community rather than substituting for it.

The Technical, Legal, And Social Changes Required

What would it actually take to rebuild digital infrastructure for human connection? A framework in three layers:

Technical layer. Algorithmic recommendation must be reoriented away from engagement maximization. This is technically straightforward — it's a change to the objective function the algorithm optimizes for. The difficulty is that no current platform will do this voluntarily because it would reduce engagement metrics and therefore advertising revenue. Technical mandates through regulation — requiring that recommendation systems be auditable, that users be able to opt into non-engagement-optimized feeds, that platforms demonstrate their algorithms do not systematically amplify hate speech or disinformation — are the mechanism.
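
In code, the "change to the objective function" is as small as it sounds. The feature names and weights below are hypothetical placeholders (no platform publishes its real scoring model); the point is that the ranking machinery stays identical while the optimization target changes.

```python
# Hypothetical feature names and weights; no platform publishes its
# real scoring model. Same ranking machinery, different objective.

def engagement_objective(item):
    # Roughly what engagement-optimized feeds maximize today.
    return (0.5 * item["p_click"]
            + 0.3 * item["p_reshare"]
            + 0.2 * item["p_comment"])

def wellbeing_objective(item):
    # Same inputs, different target: demote predicted outrage,
    # reward corroboration across diverse sources.
    return (0.4 * item["p_click"]
            + 0.4 * item["source_diversity"]
            - 0.4 * item["p_outrage"])

def rank_feed(items, objective):
    return sorted(items, key=objective, reverse=True)

items = [
    {"p_click": 0.9, "p_reshare": 0.8, "p_comment": 0.7,
     "p_outrage": 0.9, "source_diversity": 0.1},   # rage bait
    {"p_click": 0.6, "p_reshare": 0.3, "p_comment": 0.4,
     "p_outrage": 0.1, "source_diversity": 0.8},   # useful report
]
print(rank_feed(items, engagement_objective)[0]["p_outrage"])  # 0.9 wins
print(rank_feed(items, wellbeing_objective)[0]["p_outrage"])   # 0.1 wins
```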

Interoperability mandates are equally important. If users could take their social graph from Facebook to a competing platform without losing their connections — the way you can take your phone number from one carrier to another — the monopoly lock-in would weaken. Platforms would have to compete on quality of experience rather than size of existing network.
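
What "taking your social graph with you" might look like at the data level is sketched below, using a made-up JSON schema. Real interoperability would also require a shared identity and messaging protocol (ActivityPub, which powers the Fediverse discussed earlier, is the closest existing example), not merely an export format.

```python
# A sketch of a portable social graph export. The JSON schema here is
# invented for illustration; real interoperability would also need a
# shared identity and messaging protocol such as ActivityPub.
import json

def export_social_graph(user_handle, follows, blocks):
    return json.dumps({
        "schema": "portable-social-graph/v0",  # hypothetical schema name
        "user": user_handle,
        "follows": sorted(follows),  # portable handles, not platform IDs
        "blocks": sorted(blocks),
    }, indent=2)

print(export_social_graph(
    "alice@example.social",
    follows={"bob@example.social", "carol@another.host"},
    blocks=set(),
))
```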

Legal layer. Section 230 of the Communications Decency Act, which gives U.S. platforms immunity from liability for user-generated content, was designed in 1996 for a different internet. It has allowed platforms to profit from content at enormous scale without accountability for its harms. Reform — not repeal, which would create a different set of problems — should condition immunity on meaningful content moderation, algorithmic transparency, and demonstrable harm reduction.

Antitrust enforcement against platform monopolies is necessary but not sufficient. Breaking up Meta into Facebook, Instagram, and WhatsApp would create three companies with the same surveillance capitalism business model rather than one. Structural change requires addressing the business model, not just the market concentration.

Data minimization requirements — limiting what platforms can collect and how long they can retain it — would weaken the targeting infrastructure that makes surveillance capitalism work. The EU's GDPR established the principle. Enforcement has been inadequate. Stronger enforcement and equivalent legislation in the United States would change the economic calculus for platform companies.

Social layer. Legal and technical changes are necessary but not sufficient without corresponding shifts in how people relate to digital platforms. Digital literacy — genuine understanding of how recommendation algorithms work, how advertising targeting works, how engagement metrics shape content — is not currently taught at scale in most educational systems. It should be.

Community-level practices around digital technology — norms about what is appropriate to share, what attention to give platforms, how to create genuine connection rather than audience performance — are forming organically but unevenly. These norms can be cultivated deliberately through community institutions: schools, religious organizations, civic groups, neighborhood associations. The question for communities is not whether to use digital tools but how to use them in service of relationship rather than as a substitute for it.

The Community Scale: What This Means For Your Neighborhood

At the community scale — where this article lives — the practical question is how to use digital infrastructure to build rather than substitute for real relationship.

The evidence suggests a clear pattern: digital tools enhance existing community relationships; they do not create community where none exists. The most effective uses of social media for community building are ones where the platform is a coordination mechanism for in-person activity — not a replacement for it.

This means the community-scale intervention is not primarily technical. It's relational: investing in the face-to-face relationships and community institutions that give digital tools something real to amplify. The neighborhood mutual aid network that pre-existed COVID and therefore survived it. The civic organization that uses a group chat to coordinate in-person events. The faith community that uses digital communication to connect members who then see each other weekly.

The internet was supposed to dissolve geography and make connection global. The evidence is that it works best locally, embedded in physical community, used as a coordination layer on top of real relationship rather than as a substitute for it. This is not a failure of the technology. It's a clarification of what the technology is good for.

The mistake was not building the internet. The mistake was building the current internet on a business model that is structurally incompatible with human connection, and then mistaking scale for depth.

Exercises

Algorithm audit. Spend one week keeping a log of the content you encounter on your primary social platform. Categorize it: does it make you feel connected or disconnected from other people? Does it portray outgroup members as complex humans or as threats? Does it leave you more or less trusting of the world? At the end of the week, you have data about what the algorithm is actually doing to your sense of human connection.

The timeline fast. For two weeks, stop consuming algorithmically curated feeds. Read things you have deliberately sought out, from sources you have chosen. Notice what changes in your perception of other people, of politics, of the world. This is not a call for permanent abstinence — it's a diagnostic.

Digital community mapping. Map the digital communities you currently participate in. For each one: Does participation in this community lead to in-person relationship? Does it expose you to genuine difference or reinforce existing beliefs? Does it leave you feeling more or less connected to your fellow humans? Use this map to make intentional choices rather than default ones.

Build something analog. Identify one community need in your neighborhood that could be addressed through face-to-face relationship — a skill share, a tool library, a regular neighborhood meal. Build the human infrastructure first. Then use digital tools to coordinate it.

Citations And Sources

- Shoshana Zuboff, The Age of Surveillance Capitalism, PublicAffairs, 2019
- Eli Pariser, The Filter Bubble: What the Internet Is Hiding From You, Penguin Press, 2011
- Chris Bail, Breaking the Social Media Prism, Princeton University Press, 2021
- Soroush Vosoughi, Deb Roy, and Sinan Aral, "The Spread of True and False News Online," Science, Vol. 359, No. 6380, 2018
- John Perry Barlow, "A Declaration of the Independence of Cyberspace," 1996
- Frances Haugen, testimony before the U.S. Senate Commerce Committee, October 2021
- Wall Street Journal, "The Facebook Files," September–October 2021
- Kevin Roose, "The Making of a YouTube Radical," New York Times, June 8, 2019
- Audrey Tang on the use of Pol.is in Taiwan: see the Taiwan Digital Minister's public writings and presentations
- Sherry Turkle, Alone Together: Why We Expect More from Technology and Less from Each Other, Basic Books, 2011
- EU Digital Services Act, fully applicable to all platforms since February 2024
- FTC v. Meta Platforms, Inc., filed 2020; U.S. antitrust cases against Google, 2023–2024
- Tim Wu, The Master Switch: The Rise and Fall of Information Empires, Knopf, 2010
- Cathy O'Neil, Weapons of Math Destruction, Crown, 2016
- Stewart Brand, The Media Lab: Inventing the Future at MIT, Viking, 1987 (source of the fuller "information wants to be free" quote)
