How a Humble Approach to Technology Development Avoids Catastrophe
The Pattern That Keeps Recurring
Every major technology catastrophe in history has the same underlying structure. A technology is developed. Its immediate benefits are real and visible. Its potential harms are either unknown, uncertain, or known only to insiders who are financially invested in deployment. The technology scales. The harms emerge, but slowly enough and diffusely enough that causation is contested. The industry funds doubt. Regulation eventually comes. By then the harm has been done to the people who bore the cost of the industry's confidence.
This is not a paranoid reading of history. This is the documented record.
Leaded gasoline: Thomas Midgley Jr., who developed both leaded gasoline and chlorofluorocarbons (responsible for ozone depletion), has been called the individual who had the most negative impact on the atmosphere in human history. Lead was added to gasoline beginning in 1923. By the 1960s, there was strong scientific evidence of serious harm. The industry fought the research, funded counter-research, and lobbied aggressively. Leaded gasoline was phased out in the United States in 1996. Environmental lead remains a serious public health problem globally. The elimination of lead from gasoline has been associated with measurable IQ gains across populations — meaning the cognitive damage of 70 years of lead exposure is incalculable.
Tobacco: The tobacco industry's deliberate suppression of evidence linking smoking to cancer is the most extensively documented case of corporate harm concealment in history. By the early 1950s, the epidemiological link between smoking and lung cancer was clear to internal industry researchers. The industry spent 40 years manufacturing doubt. The harm was not just cigarettes — it was the demonstration that large industries could successfully delay regulatory response for decades by funding uncertainty.
Asbestos: Known to be harmful since ancient times (Roman slaves who worked it died young), asbestos was deployed at massive scale in construction, shipbuilding, and insulation through most of the 20th century. Companies knew. Workers didn't. The mesothelioma and asbestosis epidemic is still playing out — diseases with 20-40 year latency periods mean people are still dying from asbestos exposures from the 1980s.
Thalidomide: The particular cruelty of thalidomide is that it was specifically marketed as safe for pregnant women. It passed toxicity testing because standard toxicity testing didn't examine fetal effects. The U.S. was largely spared by the FDA reviewer Frances Oldham Kelsey, who resisted pressure to approve it on the grounds that the evidence of safety was insufficient. She was right. She was one person doing her job carefully in an institution designed to move carefully. Her counterparts in Europe were not so cautious.
The modern equivalents are not hard to identify.
Social Media and Adolescent Mental Health: Facebook opened to all users in 2006. The iPhone was released in 2007. Instagram launched in 2010. Around 2012, rates of adolescent depression and anxiety began rising significantly in countries with high smartphone penetration. By 2019 the pattern was clear enough that major researchers, including Jean Twenge, Jonathan Haidt, and Amy Orben, were publishing extensively on the relationship. Meta's internal research, revealed through the Facebook Papers in 2021, showed that the company's own data scientists had documented harm to adolescent users, particularly girls, and that those findings had not changed product decisions. The technology was deployed at global scale, on billions of people, before anyone systematically asked what it would do to developing adolescent brains.
Algorithmic Content Recommendation: The shift from chronological social media feeds to engagement-optimized algorithmic feeds happened between 2009 and 2016 across major platforms. The optimization target — engagement — is not equivalent to human wellbeing. Content that triggers strong emotional responses (fear, outrage, disgust) gets more engagement. Optimizing for engagement therefore means systematically amplifying emotionally agitating content at global scale, to billions of people, continuously, for years. The effect on political polarization, on epistemic fragmentation, on individual mental health — these were not studied before deployment. They were noticed afterward.
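To make the mechanism concrete, here is a minimal sketch of the difference between a feed ranked purely on predicted engagement and one that also penalizes high-arousal content. The posts, scores, and the 0.15 penalty weight are invented for illustration; this does not represent any platform's actual ranking system, only the structural consequence of the objective.

```python
# A toy illustration, not any platform's real system: posts, scores, and the
# 0.15 penalty weight are all invented for this example.
from dataclasses import dataclass

@dataclass
class Post:
    title: str
    predicted_engagement: float  # modeled chance the user clicks, comments, or shares
    emotional_arousal: float     # 0.0 = calm, 1.0 = outrage- or fear-inducing

posts = [
    Post("Local library extends opening hours", 0.04, 0.10),
    Post("Air quality shows modest improvement", 0.03, 0.10),
    Post("THEY are hiding what's in your food", 0.12, 0.90),
    Post("Outrage as official caught lying AGAIN", 0.15, 0.95),
]

# Engagement-only objective: agitating content rises to the top, because
# (by assumption in this toy data, and empirically) it draws more interaction.
engagement_feed = sorted(posts, key=lambda p: p.predicted_engagement, reverse=True)

# One of many possible alternative objectives: subtract a penalty for high-arousal content.
adjusted_feed = sorted(
    posts,
    key=lambda p: p.predicted_engagement - 0.15 * p.emotional_arousal,
    reverse=True,
)

print([p.title for p in engagement_feed])  # outrage posts first
print([p.title for p in adjusted_feed])    # calmer posts move up
```

The point is not that the penalized objective is the right one; it is that the choice of objective is a design decision with population-level consequences, and the engagement-only version was deployed without that choice ever being examined.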
Artificial Intelligence: As of 2024, large language models and other AI systems are being deployed into hiring, healthcare, criminal justice, financial services, and information environments at rates that far outpace either regulatory oversight or independent safety assessment. The potential harms — job displacement, bias encoding, epistemic disruption, misuse for fraud and manipulation, autonomous system failures — are partially understood and deeply uncertain. The deployment continues because the competitive and commercial incentives to deploy are immediate and the harms are uncertain and delayed.
What the Precautionary Principle Actually Says
The precautionary principle gets caricatured as "ban everything until it's proven perfectly safe forever," which is a straw man that nobody seriously defends. The actual principle, given international standing in the 1992 Rio Declaration and refined in later formulations such as the 1998 Wingspread Statement, says something more precise:
When an action raises threats of harm to human health or the environment, precautionary measures should be taken even if some cause-and-effect relationships are not fully established scientifically.
The key elements:
1. The application is proportional to the magnitude and irreversibility of the potential harm.
2. It doesn't require certainty of harm, only a credible threat of significant harm.
3. It places the burden of demonstrating safety on the developer, rather than requiring proof of harm before restriction.
4. It is explicitly not a ban on innovation; it's a call for adequate assessment before large-scale deployment.
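Taken together, these elements amount to a decision rule, and a rough sketch in code can show how they combine. The categories, thresholds, and 0-to-1 scores below are placeholders of my own invention, not anything drawn from the Rio or Wingspread texts; the structural point is that scrutiny scales with magnitude and irreversibility, and that thin safety evidence from the developer blocks deployment rather than permitting it by default.

```python
# A structural sketch only: categories, thresholds, and 0-to-1 scores are
# invented for illustration, not drawn from the Rio Declaration or any statute.
from enum import Enum

class Assessment(Enum):
    ROUTINE_REVIEW = "routine review"
    STAGED_DEPLOYMENT = "staged deployment with mandatory monitoring"
    INDEPENDENT_ASSESSMENT = "independent pre-deployment assessment"
    NO_DEPLOYMENT = "no deployment until safety is demonstrated"

def required_scrutiny(credible_threat: bool,
                      harm_magnitude: float,   # 0..1, plausible scale of harm
                      irreversible: bool,
                      safety_evidence: float   # 0..1, strength of developer's evidence
                      ) -> Assessment:
    if not credible_threat:
        return Assessment.ROUTINE_REVIEW

    serious = irreversible or harm_magnitude > 0.7

    # Burden of proof on the developer: thin safety evidence blocks deployment
    # when the credible threat is serious, rather than permitting it by default.
    if serious and safety_evidence < 0.5:
        return Assessment.NO_DEPLOYMENT

    # Proportionality: bigger or less reversible threats get heavier process.
    return Assessment.INDEPENDENT_ASSESSMENT if serious else Assessment.STAGED_DEPLOYMENT

print(required_scrutiny(True, 0.8, True, 0.2).value)   # no deployment until safety is demonstrated
print(required_scrutiny(True, 0.4, False, 0.6).value)  # staged deployment with mandatory monitoring
```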
The pharmaceutical industry already operates under something like this standard. Drug developers must demonstrate both efficacy and safety — including through staged clinical trials that progressively expand exposure and monitoring — before receiving market authorization. The system is imperfect and under-resourced, but its basic logic is right: you don't expose the whole population to a new compound before you have reasonable confidence about what it does.
There is no equivalent framework for software platforms, AI systems, or social media algorithms. The closest analog — app store review processes — is designed primarily to check for security vulnerabilities and policy compliance, not population-level psychological effects.
What Humble Technology Development Looks Like Institutionally
Staged Deployment Requirements: Technologies with significant potential for population-level harm (social platforms, AI systems in high-stakes decisions, biological technologies) should require staged deployment with mandatory monitoring periods before full-scale release. This is how clinical trials work. It's how nuclear facility licensing works. There's no technical reason it can't apply to software.
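As a rough sketch of what a staged-deployment gate could look like in software, consider the fragment below. The exposure fractions, harm threshold, and simulated monitoring signal are hypothetical placeholders, not a description of any existing regulatory process.

```python
# A hypothetical sketch of staged deployment with monitoring gates, loosely
# modeled on phased clinical trials. Exposure fractions, the harm threshold,
# and the simulated signal are placeholders, not real regulatory parameters.

STAGES = [0.001, 0.01, 0.05, 0.25, 1.0]  # fraction of users exposed at each stage
HARM_THRESHOLD = 0.02                    # maximum tolerated harm signal per stage

def staged_rollout(measure_harm_signal):
    """Expand exposure only while the independently monitored harm signal
    stays below the threshold; otherwise halt and hand the problem back
    to the developer."""
    for exposure in STAGES:
        harm = measure_harm_signal(exposure)
        if harm > HARM_THRESHOLD:
            return f"halted at {exposure:.1%} exposure (harm signal {harm:.3f})"
    return "full deployment authorized: all monitoring gates passed"

def simulated_signal(exposure):
    """Stand-in for the mandatory monitoring period (adverse-event reports,
    wellbeing surveys, independent audits); here harm grows with exposure."""
    return 0.005 + 0.07 * exposure

print(staged_rollout(simulated_signal))  # halts at 25.0% exposure in this toy run
```

The substance of such a regime is in who runs the monitoring and what counts as a harm signal; the code only shows that the gating logic itself is trivial to implement once those questions are answered.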
Independent Safety Assessment: Developer self-assessment is insufficient. The tobacco industry's research showed that clearly. Independent bodies with expertise in the relevant harm domains — public health, psychology, environmental science, economics — should conduct mandatory assessment of high-impact technologies. The technical complexity of AI doesn't make this impossible; it makes it harder, and therefore more important to resource adequately.
Reversibility as a Design Requirement: Some technology failures are reversible — you can recall a drug, pull an asbestos product, shut down a plant. Others are not — genetic modifications released into ecosystems, AI systems trained on billions of interactions, social norms changed by years of algorithmic conditioning. Technologies with irreversible failure modes should be held to higher standards precisely because you can't undo the damage.
Honest Uncertainty Communication: Companies and developers routinely overstate their confidence in the safety of their technologies and understate known risks in public communications. Regulatory requirements for honest uncertainty disclosure — applied the way financial instruments require disclosure of risks — would change incentive structures.
Public Benefit Corporations and Long-Term Liability: When companies bear long-term liability for the harms their technologies create, they become more cautious. When liability is limited or can be escaped through bankruptcy, the cost externalization that drives reckless deployment persists. Extending liability timelines and limiting liability shields for technology harm would change the calculation.
The Institutions That Can Enforce It
The EU's approach to AI regulation (the EU AI Act) and its Digital Services Act represent the most comprehensive attempt by any major jurisdiction to apply precautionary logic to technology deployment. These are imperfect and still developing, but they establish the principle that technology platforms bear responsibility for the harms their systems create and must demonstrate adequate safety measures before deploying high-risk systems.
The FDA's track record on drugs, despite constant pressure to move faster, shows that regulatory caution produces better outcomes than industry self-regulation. The FDA's refusal to approve thalidomide is the canonical example. Its subsequent approval of opioids like OxyContin shows the costs of regulatory capture and political pressure to move faster.
The reality is that every institution that successfully enforces precautionary standards exists in permanent tension with the industries it regulates. Industry will always have more resources to put into lobbying for faster approval, weaker standards, and broader liability protection. Public health and safety institutions require strong political backing to resist this pressure — backing that comes ultimately from a public that understands what's at stake.
The Deeper Point: Humility as Technology Ethics
Technological hubris is the belief that we understand what we've built well enough to deploy it broadly without adequate testing. It's not arrogance in the cartoon villain sense. It's the genuine belief — often held by very smart, well-intentioned people — that their intelligence and their testing have covered the relevant risks.
They're always wrong in ways they couldn't see. That's the nature of complex systems: they have emergent properties that component-level analysis doesn't reveal. A drug that's safe in isolation may not be safe combined with another drug, or in a pregnancy, or over decades, or in specific populations not represented in trials. A social platform that connects people also, at scale, changes what "connecting" means, what "friend" means, what the social comparison landscape looks like for a developing adolescent.
Humble technology development acknowledges this. It says: we are smart, our testing is good, and we might still be wrong in ways we can't see. Therefore we proceed carefully, with monitoring, with staged deployment, with genuine willingness to stop if the data says we should.
This is not a failure of confidence in science. It's a deep understanding of what science actually is: not certainty, but systematic reduction of uncertainty, done carefully, with ongoing revision as evidence accumulates.
The civilizational cost of technological hubris is already in the record. Lead in the blood of a generation. Mesothelioma decades after exposure. Lungs destroyed by asbestos dust. Adolescents with depression rates that have no historical precedent. These aren't abstractions. They're people.
Humility about what we build is not weakness. It's the mark of people who take seriously what they're responsible for.