Think and Save the World

What Artificial Intelligence Governance Requires From A Thinking Global Population


The governance of transformative technologies follows a recurring pattern. The technology arrives. Its full implications take ten to twenty years to become visible. By the time the public understands what's happening, the infrastructure is entrenched, the economic interests are organized, and the cost of meaningful course correction is very high. This happened with social media. It happened with financial derivatives. It is happening right now with AI, on a compressed timeline.

What makes AI different from previous technology governance challenges — and what makes the cognitive state of the global population uniquely decisive — is the combination of three properties: generality, speed, and opacity.

Generality: AI is not a specific technology. It's a class of techniques for building systems that optimize for objectives. Those techniques are applicable to essentially any domain where there's data and a defined goal. This means AI governance isn't one problem — it's every problem at once, in every sector.

Speed: The development cycle is not measured in decades. It's measured in months. A regulatory body that takes three years to develop a standard is regulating last generation's technology. The only governance approaches that can keep pace are ones that are principles-based, flexible, and informed by a constituency that can update its understanding at something approaching the same rate as the technology.

Opacity: The most powerful AI systems are not interpretable. You cannot read a large neural network and explain why it produced a specific output the way you can read a piece of legislation and trace its reasoning. This means governance cannot rely on the traditional "look inside the box" approach to accountability. New accountability mechanisms — model audits, red-teaming requirements, incident reporting, interpretability research — are all necessary, and all require a public that understands why they matter.

The Democratic Accountability Problem

Let's be specific about what's broken. Democratic accountability for technology policy requires:

1. Citizens who understand the issue well enough to form preferences
2. Media capable of translating expert debate accurately into publicly accessible terms
3. Politicians who understand the issue well enough to legislate meaningfully
4. Regulatory bodies with the expertise and independence to implement legislation
5. Civil society organizations that can represent the public interest in technical standards-setting processes

None of these five conditions is fully met right now, anywhere in the world. The closest approximations — the EU's AI Act, the US executive order on safe, secure, and trustworthy AI, and the UK's AI Safety Institute (AISI) — are frameworks built by relatively small expert communities with limited feedback from the populations they're meant to protect.

This isn't a criticism of those efforts. Given current conditions, they are genuine achievements. It is, however, a diagnosis of the structural limitation: expert governance of a public good, in the absence of an engaged public, tends toward the interests of the organized and the well-resourced. The organized and well-resourced in AI are predominantly the companies building it.

What A Thinking Population Enables

Let's be concrete about what changes when the population can reason about AI at a basic level.

Objective function literacy: The single most powerful concept in understanding AI risk is the objective function — what the system is actually optimizing for. If citizens understand that social media recommendation systems optimize for engagement (not wellbeing), and that engagement is empirically correlated with emotional arousal, outrage, and tribalism, they can evaluate regulatory proposals in terms of whether those proposals address the objective function. "Add more content warnings" doesn't change what the system is optimizing for. "Require platforms to demonstrate that their recommendation systems don't systematically increase anxiety and political polarization" does. A population with objective function literacy can demand the second type of regulation and reject the first as theater.
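To make the distinction concrete, here is a deliberately minimal Python sketch — made-up item fields and scores, not any platform's real code. It ranks the same feed under two objective functions: one that maximizes predicted engagement alone, and one whose objective pays a penalty on a predicted-harm proxy.

```python
# Toy sketch (hypothetical fields, not a real platform's code) contrasting
# two objective functions for a recommendation feed.
from dataclasses import dataclass

@dataclass
class Item:
    title: str
    predicted_engagement: float  # e.g. expected clicks or watch time
    predicted_arousal: float     # proxy for outrage/anxiety, 0..1

def rank_by_engagement(items):
    """Optimize for engagement alone: the objective the essay describes."""
    return sorted(items, key=lambda i: i.predicted_engagement, reverse=True)

def rank_with_harm_penalty(items, penalty=2.0):
    """Same ranking, but the objective now pays a cost for predicted harm.
    Regulation aimed at the objective function changes this line, not the UI."""
    return sorted(
        items,
        key=lambda i: i.predicted_engagement - penalty * i.predicted_arousal,
        reverse=True,
    )

feed = [
    Item("calm explainer", predicted_engagement=0.6, predicted_arousal=0.1),
    Item("outrage bait", predicted_engagement=0.9, predicted_arousal=0.8),
]

print([i.title for i in rank_by_engagement(feed)])      # outrage bait first
print([i.title for i in rank_with_harm_penalty(feed)])  # calm explainer first
```

The regulatory point is that the two rankings differ only in the objective. No amount of content warnings changes which line of this code decides what people see.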

Bias and discrimination legibility: Algorithmic discrimination is one of the most significant near-term AI harms, and it's one of the hardest to govern precisely because it's not intentional. A hiring algorithm trained on historical hiring data will encode historical hiring patterns — which means it will reproduce the gender and racial composition of who got hired in the past, not who is qualified in the present. This is not an engineering failure. It's a predictable consequence of training on biased data. Governing it requires understanding this mechanism, so that regulation targets data governance, evaluation requirements, and audit obligations — not just vague "fairness" mandates that can be satisfied by surface-level demographic balancing while the underlying mechanism remains intact.
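A deliberately minimal sketch of why this happens, using fabricated toy records rather than real hiring data: a "model" that does nothing more than memorize historical hire rates per group — roughly what a richer model converges toward when group membership correlates with the label — will score two equally qualified candidates differently.

```python
# Fabricated toy data: (group, qualified, hired), where group A was
# historically favored. Not a real hiring system.
from collections import defaultdict

history = [
    ("A", True, True), ("A", True, True), ("A", False, True), ("A", True, False),
    ("B", True, False), ("B", True, True), ("B", True, False), ("B", False, False),
]

def fit_hire_rate(records):
    """'Training' here just memorizes the historical hire rate per group,
    which is the pattern a learned model picks up when group correlates
    with the label in its training data."""
    hired, total = defaultdict(int), defaultdict(int)
    for group, _qualified, was_hired in records:
        total[group] += 1
        hired[group] += was_hired
    return {g: hired[g] / total[g] for g in total}

model = fit_hire_rate(history)
print(model)  # {'A': 0.75, 'B': 0.25} -- past disparity, now encoded as a score

# Two equally qualified candidates get different scores purely by group:
print("A, qualified:", model["A"])
print("B, qualified:", model["B"])
```

This is also what an audit obligation has to measure: the per-group score gap at equal qualification, not just aggregate demographic counts.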

Concentration of power as the core risk: The most underappreciated governance challenge in AI is not bias, not safety failures, not even autonomous weapons — it's concentration. Advanced AI systems require enormous compute, enormous data, and enormous engineering talent. These are not evenly distributed. The companies and governments that control them will control infrastructure that is, within a decade, more consequential than oil pipelines and financial systems. A population that doesn't think about this can't pressure governments to address it. A population that does think about it can demand antitrust frameworks, public compute infrastructure, open-model requirements, and international agreements that prevent AI from becoming the technical foundation of a new form of authoritarianism.

The International Dimension

AI governance is inherently international in a way that most technology governance hasn't been. The models are global. The data flows are global. The companies are multinational. And the geopolitical competition around AI — particularly between the US and China — is creating incentives for both sides to deprioritize safety in favor of speed.

This is a collective action problem at the civilizational scale. No single country can solve it unilaterally. International agreements — like those that govern nuclear weapons, chemical weapons, and to some degree the financial system — require political will at the domestic level in multiple countries simultaneously. That political will requires publics in those countries who understand what they're being asked to support and why.

A thinking global population is the only constituency that can generate demand for international AI governance agreements. Expert communities can design them. Governments can negotiate them. But they won't hold without public understanding and pressure, because every agreement requires domestic political support to implement and enforce.

The Alignment Problem as a Public Problem

The AI alignment problem — roughly, the challenge of ensuring that as AI systems become more capable, they remain oriented toward human values rather than diverging in pursuit of their own optimization targets — is often discussed as a purely technical challenge. It's not. Or rather, it won't be for long.

Alignment is partly technical, but it's fundamentally a question about which human values AI systems should be aligned to. Values are contested. They vary by culture, history, and circumstance. The decision about whose values get encoded into increasingly powerful AI systems is not a technical decision — it's a political and ethical one. And in a world where those systems will operate globally, affecting billions of people, it's a decision that should involve input from billions of people.

That requires those billions of people to understand the question well enough to engage with it. What does it mean to say an AI system has values? How are those values encoded? Who decides? What happens when the values of different communities conflict? These are not technical questions. They're the hardest questions in political philosophy, now instantiated in code. A public that can't think clearly about these questions will have those questions answered for them by whoever controls the systems.

The World Peace Angle

Two specific AI governance failures have direct pathways to conflict.

First: autonomous weapons systems. The development of AI-enabled weapons that can identify and engage targets without human decision-making is accelerating. The governance frameworks for these systems — what rules of engagement they operate under, how liability works when they make targeting errors, whether they can be used in ways that escalate conflicts faster than human decision-making could de-escalate them — are lagging the technology dramatically. A public that understands both the technical capabilities and the strategic instability risks of autonomous weapons is a public that can demand binding international agreements before those systems are deployed at scale. We have a window. It is closing.

Second: information warfare. AI-generated disinformation, deepfakes, and synthetic media are already destabilizing political environments. The same technology that produces plausible-sounding text and realistic-looking video can fabricate the evidence for atrocities, manufacture consent for wars, and undermine the trust that international agreements require to hold. A population that cannot distinguish authentic evidence from synthetic evidence — or that doesn't understand how such systems work — is maximally vulnerable to this. A population with media literacy sophisticated enough to engage with AI-generated content critically is substantially less vulnerable.

Neither of these is a small problem. Both of them have direct causal pathways from "AI governance failure" to "violent conflict." A thinking global population is the prerequisite for the governance structures that prevent those pathways from being traveled.

This is what the premise of this manual means, made concrete. The manual is about thinking. AI governance is the clearest current case where better thinking at civilizational scale changes material outcomes for everyone on the planet — not in the abstract, but in the specific, traceable sense of preventing specific categories of catastrophe.

Start with the objective function. That's always where to start.
