How The International Movement For Ethical AI Encodes Shared Human Values
The Current Landscape
AI governance is fragmented across national, regional, and institutional levels, with no unified global framework.
The EU AI Act (2024) is the most comprehensive legislative framework to date. It classifies AI systems by risk level: unacceptable (banned), high (heavily regulated), limited (transparency obligations), and minimal (largely unregulated). Banned applications include social scoring systems, real-time remote biometric identification in publicly accessible spaces (with narrow law-enforcement exceptions), and AI that exploits the vulnerabilities of specific groups. High-risk applications, including AI in hiring, education, law enforcement, and healthcare, must meet requirements for transparency, human oversight, data quality, and non-discrimination.
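To make the tiered structure concrete, here is a minimal Python sketch of the four-tier logic. The example use cases, tier assignments, and obligation summaries are illustrative assumptions only; real classification under the Act turns on a system's intended purpose, context, and the Act's annexes, not a lookup table.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright"
    HIGH = "conformity assessment, human oversight, data-quality controls"
    LIMITED = "transparency obligations (e.g., disclose that AI is in use)"
    MINIMAL = "no additional obligations"

# Hypothetical mapping for illustration only. Actual classification under
# the Act depends on intended purpose and context, not a keyword lookup.
EXAMPLE_TIERS = {
    "social_scoring":   RiskTier.UNACCEPTABLE,
    "resume_screening": RiskTier.HIGH,
    "customer_chatbot": RiskTier.LIMITED,
    "spam_filter":      RiskTier.MINIMAL,
}

for use_case, tier in EXAMPLE_TIERS.items():
    print(f"{use_case}: {tier.name} -> {tier.value}")
```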
UNESCO's Recommendation on the Ethics of Artificial Intelligence (2021) was adopted by all 193 member states, making it the first global normative framework for AI. It establishes principles including proportionality, safety, fairness, sustainability, privacy, human oversight, transparency, responsibility, and awareness. It's non-binding, which limits enforcement, but its universal adoption is significant.
The OECD AI Principles (2019) were endorsed by 46 countries. They emphasize inclusive growth, sustainable development, human-centered values, transparency, robustness, and accountability. The OECD also established the AI Policy Observatory to monitor implementation.
China's AI governance takes a different approach: heavy regulation of specific applications (deepfakes, recommendation algorithms, generative AI) combined with state-directed development. China's regulatory framework prioritizes social stability and state control alongside innovation — a values set that differs substantially from the EU's rights-based approach.
The United States has the weakest regulatory framework among the major AI powers. Executive orders and voluntary commitments from companies have substituted for legislation. The U.S. AI Safety Institute, established within NIST in 2023, has limited authority. Market-driven development remains the dominant paradigm.
---
The Values Problem
Every AI system encodes values. The question is which ones, chosen by whom.
Optimization targets are value choices. When a social media algorithm is optimized for engagement, that's a value choice — it prioritizes time-on-platform over user well-being, accuracy, or social cohesion. When a hiring algorithm is trained on historical hiring data, it encodes the biases present in that data — including racial, gender, and class biases. When a predictive policing algorithm is trained on arrest data, it encodes the policing patterns that produced that data — overpolicing of poor and minority neighborhoods.
These are not bugs. They're features of a development process that treats technical performance as the primary metric and treats value alignment as an afterthought.
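A toy ranking function makes the point concrete. In the sketch below, every field name, signal, and weight is a hypothetical assumption; the only difference between the two rankers is the objective. Change the weights and you change what the system values, without touching any of the "technical" machinery.

```python
from dataclasses import dataclass

@dataclass
class Post:
    engagement: float  # predicted time-on-platform, normalized to 0..1
    accuracy: float    # hypothetical fact-check signal, 0..1
    wellbeing: float   # hypothetical user-wellbeing signal, 0..1

def rank_engagement_only(posts):
    # Value choice #1: time-on-platform is all that matters.
    return sorted(posts, key=lambda p: p.engagement, reverse=True)

def rank_blended(posts, w_eng=0.2, w_acc=0.4, w_well=0.4):
    # Value choice #2: the weights ARE the values.
    def score(p):
        return w_eng * p.engagement + w_acc * p.accuracy + w_well * p.wellbeing
    return sorted(posts, key=score, reverse=True)

posts = [Post(0.9, 0.2, 0.1), Post(0.5, 0.9, 0.8)]
print(rank_engagement_only(posts)[0])  # the outrage bait wins
print(rank_blended(posts)[0])          # the accurate, healthier post wins
```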
Bias at scale. ProPublica's 2016 investigation of the COMPAS recidivism prediction algorithm found that it was nearly twice as likely to falsely flag Black defendants as high-risk compared to white defendants. Amazon's experimental hiring algorithm, trained on a decade of resumes, penalized resumes that included the word "women's" (as in "women's chess club"). These are the well-documented cases. For every one of them, there are likely many undocumented ones: algorithmic decisions affecting credit, housing, insurance, and employment that encode biases invisible to those affected.
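The disparity ProPublica measured is easy to state in code: the false positive rate per group, i.e., how often people who did not reoffend were flagged high-risk. The sketch below uses invented toy records; a real audit, like ProPublica's, works over thousands of actual cases.

```python
def false_positive_rate(records, group):
    """Share of non-reoffenders in `group` who were flagged high-risk."""
    negatives = [r for r in records if r["group"] == group and not r["reoffended"]]
    if not negatives:
        return float("nan")
    return sum(r["flagged_high_risk"] for r in negatives) / len(negatives)

# Invented toy data for illustration only.
records = [
    {"group": "A", "reoffended": False, "flagged_high_risk": True},
    {"group": "A", "reoffended": False, "flagged_high_risk": True},
    {"group": "A", "reoffended": False, "flagged_high_risk": False},
    {"group": "B", "reoffended": False, "flagged_high_risk": True},
    {"group": "B", "reoffended": False, "flagged_high_risk": False},
    {"group": "B", "reoffended": False, "flagged_high_risk": False},
]
for g in ("A", "B"):
    print(g, round(false_positive_rate(records, g), 2))
```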
The representation gap. AI development teams are overwhelmingly concentrated in a few countries and demographic groups. Stanford HAI's AI Index report found that the U.S. and China together produce roughly 60% of the world's notable AI models. Within those countries, development teams are disproportionately male (roughly 80% in the U.S.) and drawn largely from elite universities and privileged socioeconomic backgrounds.
The values of these teams (what they optimize for, what they consider edge cases, what they test for, what they overlook) inevitably shape the systems they build. When the team that designs a facial recognition system includes few dark-skinned people, dark-skinned faces are less likely to be well represented in its training and test data, and the system performs worse on them; Buolamwini and Gebru's "Gender Shades" audit found commercial gender classifiers misclassified up to 34.7% of darker-skinned women but under 1% of lighter-skinned men. When the team that designs a language model draws primarily from English-language training data, the model performs worse in other languages. Representation isn't just an equity issue. It's an accuracy issue.
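The standard response is disaggregated evaluation: report error rates per subgroup instead of one aggregate number, which is what the "Gender Shades" audit did. A minimal sketch, with subgroup labels and data invented for illustration:

```python
from collections import defaultdict

def accuracy_by_subgroup(results):
    """Aggregate accuracy can hide subgroup failures; report each separately."""
    totals, correct = defaultdict(int), defaultdict(int)
    for r in results:
        totals[r["subgroup"]] += 1
        correct[r["subgroup"]] += r["correct"]
    return {g: correct[g] / totals[g] for g in totals}

# Invented toy results: 93% overall accuracy masks a failing subgroup.
results = (
    [{"subgroup": "lighter-skinned men", "correct": True}] * 85
    + [{"subgroup": "lighter-skinned men", "correct": False}] * 2
    + [{"subgroup": "darker-skinned women", "correct": True}] * 8
    + [{"subgroup": "darker-skinned women", "correct": False}] * 5
)
for group, acc in accuracy_by_subgroup(results).items():
    print(f"{group}: {acc:.0%}")
```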
---
The Democratic Governance Challenge
The fundamental challenge: how do you democratically govern a technology that most people don't understand, that's evolving faster than regulators can respond, and that's concentrated in the hands of a few extremely powerful actors?
The knowledge asymmetry. AI companies understand their systems better than regulators, journalists, or the public. This asymmetry creates a dependency — regulators must rely on companies to explain what their systems do, which creates obvious conflicts of interest.
The speed asymmetry. Legislation takes years. AI development cycles take months. By the time a regulation is implemented, the technology it targets may be obsolete, and new, unregulated capabilities may have emerged.
The power asymmetry. The companies building the most powerful AI systems are among the wealthiest and most politically influential organizations in human history. Their lobbying budgets dwarf those of civil society organizations working on AI ethics: Google, Meta, Amazon, Microsoft, and Apple spent a combined total of more than $65 million on U.S. lobbying in 2023.
The geographic asymmetry. AI governance frameworks developed in the EU or U.S. may be imposed on populations that had no voice in their design. Conversely, AI developed without governance (in countries with weak regulatory frameworks) can be deployed globally.
---
Framework: AI Governance as Species-Level Decision
Here's the Law 1 angle. AI governance is not a technology policy question. It is a species-level decision about what kind of future we're building.
If AI is governed by market incentives alone, it will optimize for profit — which means engagement, efficiency, and prediction at the expense of equity, privacy, and human agency. This is not speculation. It's what's happening.
If AI is governed democratically — with genuine representation of the populations affected, transparency about how systems work, accountability when they cause harm, and value alignment determined through inclusive deliberation — then AI becomes a tool for shared human flourishing rather than concentrated corporate power.
The question is whether we can build governance structures at the speed and scale required. The EU AI Act is a start. UNESCO's framework is a start. But neither includes the voices of the billions of people in the Global South who will be most affected by AI systems they had no part in designing.
"We are human" means everyone gets a seat at this table. Not just the people building the systems. Not just the people profiting from them. Everyone whose life will be shaped by them — which is everyone.
---
Practical Exercises
1. Algorithm audit. Pick one algorithm that affects your daily life — a social media feed, a search engine, a recommendation system. Research how it works, what it optimizes for, and who built it. Ask: whose values does this system encode? Whose are missing?
2. The decision replacement test. Identify one decision in your life that is now made or influenced by an AI system (credit scoring, job application screening, insurance pricing, content recommendation). Ask: if a human made this decision, what would I expect from them? Transparency about their reasoning? Ability to appeal? Non-discrimination? Why should we expect less from a machine?
3. The global voice exercise. Imagine you're a farmer in rural India. An AI system has been deployed to determine your crop insurance payout. You don't understand how it works. You can't appeal its decision. The company that built it is in California. How does this feel? What would you want? Write it down. That perspective is missing from most AI governance conversations.
4. Values identification. Write down five values you believe should govern AI systems that affect your life. Compare them with the OECD AI Principles and the EU AI Act's requirements. Where do they align? Where do they diverge? What's missing?
---
Citations and Sources
- European Parliament (2024). AI Act. Regulation (EU) 2024/1689.
- UNESCO (2021). Recommendation on the Ethics of Artificial Intelligence. UNESCO.
- OECD (2019). OECD AI Principles. Organisation for Economic Co-operation and Development.
- Angwin, J., et al. (2016). "Machine Bias." ProPublica, May 23, 2016.
- Stanford Institute for Human-Centered AI (2024). AI Index Report 2024. Stanford HAI.
- Buolamwini, J., & Gebru, T. (2018). "Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification." Proceedings of Machine Learning Research, 81, 1–15.
- Zuboff, S. (2019). The Age of Surveillance Capitalism. PublicAffairs.
- Crawford, K. (2021). Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence. Yale University Press.