Think and Save the World

What A Globally Thinking Population Would Demand Of Technology Companies

The relationship between technology companies and democratic accountability is one of the defining governance challenges of the early 21st century. Technology companies have acquired unprecedented power over information environments, economic activity, and social organization at a speed that has dramatically outpaced the development of regulatory frameworks, public understanding, or coherent democratic governance. Understanding why this happened and what a thinking population would change is essential to understanding what kind of civilization we're building.

How The Accountability Gap Formed

The technology sector's extraordinary insulation from accountability is not primarily the result of political capture, though that has played a role. It's primarily the result of a genuine complexity asymmetry between the companies and the populations they serve.

The internet era produced systems of genuine novelty — distributed networks, algorithmic curation, data economies, platform markets — that required new conceptual frameworks to understand. The academic fields developing those frameworks (network economics, information theory, platform governance, algorithmic accountability) were working on multi-decade timescales. The companies were scaling on multi-year timescales. The gap between "what these systems do" and "what the public understands these systems do" grew rapidly and has never fully closed.

This complexity gap was compounded by several factors:

Early techno-optimism as default frame. The internet's early development was accompanied by a broadly shared narrative of democratization, access, and empowerment. This narrative was real and not entirely wrong — the internet did expand access to information and communication in genuinely transformative ways. But it served as a default charitable interpretation of everything that followed, creating cultural resistance to critical analysis of tech products even as those products' properties became much more complicated.

Platform structure obscures product nature. Social media platforms don't look like products in the traditional sense. They look like services — free tools that connect you to your friends. The product — the carefully curated, psychologically optimized information environment that shapes what you see, think, and feel — is not visible as a product. Understanding that you are the product (or more precisely, that your attention is the commodity sold to advertisers, and your data is the raw material used to train the targeting systems) requires seeing through the service interface to the business model beneath it. This is not an obvious move without the right conceptual tools.

Jurisdictional complexity. Global technology companies operate across jurisdictions with wildly different regulatory frameworks. This creates regulatory arbitrage opportunities — companies can locate revenue in low-tax jurisdictions, locate liability in permissive ones, and benefit from the fact that no single democratic government has jurisdiction over their global operations. The EU has led on technology regulation (GDPR, Digital Services Act, AI Act) but enforcement against global companies with large market power is genuinely difficult, and regulatory compliance has often produced minimally compliant bureaucratic processes that satisfy the letter while evading the spirit of requirements.

Speed differential. Technology companies innovate faster than regulatory processes can evaluate and respond. By the time a regulatory response to one generation of technology is complete, the companies have moved to the next generation. This is sometimes presented as an inherent property of the technology sector, but it's more accurately a property of the regulatory model: reactive governance (see law_2_417) applied to a domain that is changing faster than reactive governance can manage.

What A Globally Thinking Population Changes

The demands a thinking population would place on technology companies aren't primarily about legislation. Legislation matters, but it's downstream of political will, which is downstream of public understanding and expectation. The more fundamental change is in what populations understand about how these systems work and what they're therefore able to evaluate, demand, and refuse.

Let's be specific about each category of demand:

1. Algorithmic Transparency

The algorithmic systems that govern recommendation, curation, content moderation, search ranking, and credit scoring are the most consequential undemocratic decision-making systems in history. They shape what billions of people see, believe, and decide. They are governed by commercial optimization objectives set by private companies and implemented in systems that are, by default, opaque.

Transparency is often framed as a technical challenge — these systems are too complex to be made fully legible. This is partly true but largely wrong. The relevant transparency is not about source code. It's about:

- What inputs does the system use? (What signals determine what you see?)
- What is the system optimizing for? (What is the objective function?)
- What tradeoffs are built into the optimization? (What is being sacrificed to maximize the primary objective?)
- What are the known failure modes? (What patterns of error does the system produce, and for which populations?)

A thinking population would require answers to these questions as a condition of operating. Not as requests, but as regulatory requirements with enforcement. The implicit argument that algorithmic systems are legitimately ungovernable because they're complex would not be persuasive to a population that understood how governance of complex systems works — through input/output accountability, third-party audit, and outcome monitoring, even when the internal mechanism isn't fully legible.
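The four questions above can be made concrete with a deliberately toy ranker. This is a hypothetical sketch, not any real platform's system; the signal names and weights are invented for illustration. The point is that the "objective function" is just a weighted score, and the weights encode editorial policy that is rarely disclosed.

```python
# Toy engagement ranker: illustrates what "inputs" and "objective"
# mean in practice. All signals and weights are hypothetical.

def score(post):
    # Inputs: the signals that determine what you see.
    # Objective: a weighted sum tuned for engagement. The weights
    # *are* the editorial policy, and they are rarely disclosed.
    weights = {
        "predicted_clicks": 1.0,
        "predicted_watch_time": 2.0,
        "predicted_shares": 3.0,
    }
    return sum(weights[k] * post[k] for k in weights)

def rank(feed):
    # Tradeoff: nothing in the objective rewards accuracy or
    # well-being, so whatever drives engagement -- including
    # outrage -- rises to the top.
    return sorted(feed, key=score, reverse=True)

feed = [
    {"id": "calm_news", "predicted_clicks": 0.2,
     "predicted_watch_time": 0.5, "predicted_shares": 0.1},
    {"id": "outrage_bait", "predicted_clicks": 0.6,
     "predicted_watch_time": 0.4, "predicted_shares": 0.5},
]
print([p["id"] for p in rank(feed)])  # ['outrage_bait', 'calm_news']
```

Notice that auditing this system requires none of its source code: disclosing the input signals, the weights, and measured error rates answers all four questions.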

2. Honest Impact Reporting

Technology companies currently report business metrics — users, engagement, revenue — and do voluntary corporate social responsibility reporting that they control and frame. They do not report on the measurable harms their products produce, except when compelled by litigation or regulation.

Internal research at major social media companies has repeatedly found connections between platform use and negative mental health outcomes, particularly for adolescent girls. That research remained internal until it was leaked or disclosed in litigation. A thinking population would require that companies report on their own harm evidence in the same way that pharmaceutical companies are required to report clinical trial data — including negative results, including data that undermines the product's commercial case.

This is not an unprecedented requirement. We already do this for drugs, for financial products, for environmental pollutants. The argument that technology products should be exempt from equivalent disclosure requirements requires justifying why social media algorithms should receive less accountability than a lipid-lowering medication. A thinking population would find this justification implausible.

3. Data Governance As Civic Infrastructure

The current legal framework treats personal data as a commodity — something you own, that you trade away when you click "agree," that the receiving party can then use, sell, and leverage as they see fit. This framework was never democratically deliberated. It emerged from a combination of industry lobbying, technical novelty, and regulatory lag, and it has produced a situation where the most valuable data resources in history — behavioral data, health data, social network data — are held by a small number of private companies with essentially no public accountability for how they're used.

A globally thinking population would recognize that data governance is a public interest question, not a private contract question. The analogy is not to commodities but to infrastructure — data flows through society the way water flows through cities, and the question of who controls those flows and to what ends is a question about how power is organized in society.

Specific demands this would produce: mandatory data portability (you can take your data and leave); mandatory data minimization (companies can only collect what they need to provide the service); opt-in consent for secondary uses; public interest licensing for data resources that have been built on population-scale collection; and mandatory data deletion on request with verifiable compliance.
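Two of these demands — data minimization and opt-in consent for secondary uses — can be expressed as code-level policy at the point of collection. The sketch below is illustrative only; the field names and schema are invented, not drawn from any real system.

```python
# Hypothetical sketch of data minimization plus opt-in secondary use,
# enforced at ingestion. Field names are illustrative assumptions.

SERVICE_FIELDS = {"email", "display_name"}           # needed to provide the service
SECONDARY_FIELDS = {"location_history", "contacts"}  # valuable to the business only

def collect(submitted: dict, consents: set) -> dict:
    """Keep only fields the service needs, plus fields the user
    explicitly opted in to sharing for secondary uses. Everything
    else is dropped before it is ever stored."""
    allowed = SERVICE_FIELDS | (SECONDARY_FIELDS & consents)
    return {k: v for k, v in submitted.items() if k in allowed}

submitted = {
    "email": "a@example.com",
    "display_name": "A",
    "location_history": ["point1", "point2"],
    "contacts": ["b@example.com"],
}

# No opt-in: secondary fields never enter the system.
record = collect(submitted, consents=set())
print(sorted(record))  # ['display_name', 'email']
```

The design choice worth noting is that minimization happens before storage, so there is nothing to delete, leak, or repurpose later — which is what makes "verifiable compliance" tractable.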

4. AI Development Accountability

The development of artificial general intelligence — systems that can perform any cognitive task a human can perform — is the most consequential technological project in human history. It is currently being undertaken by a small number of private companies with minimal democratic oversight, varying commitments to safety, and significant commercial incentives to move fast and claim the market position of being first.

A thinking population would recognize this for what it is: civilizational-scale decisions being made by private actors in the absence of public deliberation. The decisions being made now — about training data, about capability evaluation, about deployment thresholds, about safety research priorities — will shape the trajectory of AI development and therefore the trajectory of human civilization for the foreseeable future.

The demands of a thinking population would center on:

- Mandatory safety evaluation frameworks with independent verification
- International coordination mechanisms (like the treaties that govern nuclear weapons) rather than regulatory races to the bottom
- Public interest representation in AI development governance rather than exclusive governance by developers and their investors
- Transparency about capability assessments and failure modes before deployment
- Accountability mechanisms when AI systems produce measurable harm

5. Platform Structure And Democratic Governance

The most fundamental demand a thinking population would make is structural: that platform companies operating at civilizational scale be governed as infrastructure rather than as private enterprises.

This doesn't mean nationalization. Infrastructure can be privately owned and publicly regulated — utilities, telecommunications, railroads, and airlines all follow this model. It means that the rules governing access, pricing, content policy, and algorithmic design would be subject to democratic oversight rather than private discretion.

The arguments against this are predictable: government involvement will stifle innovation, political control of platforms will enable censorship, regulatory processes move too slowly for tech. A thinking population can evaluate each of these:

Does regulation stifle innovation? Sometimes, but utilities have continued to innovate for a century under regulatory oversight. The question is not whether any regulation stifles any innovation but whether the specific regulatory frameworks under consideration produce net benefits or net costs. That's an empirical question.

Does oversight enable censorship? Public utility regulation doesn't give governments content control — it governs access, pricing, and technical standards. Democratic governance of AI systems doesn't require governments to control what AI systems say; it requires transparency and accountability for how they're designed and what their effects are.

Do regulatory processes move too slowly? This is an argument for improving regulatory processes, not for exempting the most powerful companies in history from accountability.

The Civilizational Stakes

Technology companies are building civilization's nervous system — the communication infrastructure, the information environment, the economic coordination mechanisms, the decision-support systems — and they're doing it without genuine democratic input from the civilizations being built.

The question is not whether to have powerful technology. Technology is humanity's primary mechanism for solving hard problems, and the hard problems of this century — energy, food, health, climate — will require more powerful technology, not less. The question is who governs the development and deployment of that technology, according to what values, subject to what accountability.

A globally thinking population answers that question differently than the current default. It does not accept that the complexity of technology exempts it from democratic governance. It does not accept that the speed of innovation makes accountability impossible. It does not accept that the private interests of technology companies are sufficiently aligned with human welfare to be trusted without oversight.

It asks the same questions of technology that it asks of any powerful institution: What is this actually for? Who benefits? Who bears the costs? Who decided, and why? What happens if this goes wrong? What are the mechanisms to correct it?

Those questions, asked consistently and in sufficient numbers, produce a different technology sector than the one we have. Not a less innovative one. A more accountable one. One that is powerful in the service of humanity rather than powerful at humanity's expense.

That's the demand a thinking civilization makes of its most powerful tools.
