Think and Save the World

How Global Mental Health Improves When Billions Stop Consuming Rage-Optimized Media


To understand what changes when billions of people stop consuming rage-optimized media, you first have to understand with some precision what rage-optimized media actually does to the brain and why it became so economically dominant.

The algorithmic discovery of outrage. Social media platforms in the early 2010s made a discovery that was obvious in retrospect and catastrophic in practice: emotionally activated content gets significantly more engagement than emotionally neutral content, and outrage-activating content gets the most engagement of all.

This wasn't a philosophical choice. It was an empirical finding from A/B testing at scale. Content that made users angry got more likes, more shares, more comments, more time-on-platform than content that made them informed, entertained, or even happy. Happiness and contentment, it turned out, are not particularly sticky emotional states for continued scrolling. Outrage is sticky. Fear is sticky. Social comparison anxiety is sticky.

Once this finding was operationalized in recommendation algorithms — optimize for the engagement metric, which responds to emotional activation — the platforms had built a machine that systematically surfaced the most emotionally activating content available in the corpus at any given moment. This meant: the most extreme political positions, the most alarming health claims, the most inflammatory intergroup rhetoric, the most outrage-generating interpretations of real events.
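This dynamic can be made concrete with a toy simulation. Everything below is invented for illustration — the corpus, the activation scores, and the linear engagement model are assumptions, not a description of any real platform's system. The point is only that a ranker optimizing for engagement, under the stated assumption that engagement rises with emotional activation, fills the top of the feed with the most activating items without anyone "choosing" outrage:

```python
import random

random.seed(0)

# Hypothetical content items, each with an invented emotional-activation
# score in [0, 1]. Higher means more outrage/fear/anxiety triggering.
corpus = [
    {"title": f"item-{i}", "activation": random.random()} for i in range(1000)
]

def engagement(item):
    """Toy engagement model: assume predicted engagement rises with
    emotional activation, per the A/B-testing finding described above."""
    return 0.1 + 0.9 * item["activation"]

def rank_feed(corpus, k=10):
    """Optimize the feed purely for predicted engagement. The optimizer
    never references outrage; the activating items surface anyway."""
    return sorted(corpus, key=engagement, reverse=True)[:k]

feed = rank_feed(corpus)
avg_corpus = sum(i["activation"] for i in corpus) / len(corpus)
avg_feed = sum(i["activation"] for i in feed) / len(feed)
print(f"mean activation, corpus: {avg_corpus:.2f}")  # roughly 0.5
print(f"mean activation, feed:   {avg_feed:.2f}")    # close to 1.0
```

The feed's mean activation is an emergent property of the objective, which is the structure of the argument in the paragraph above: optimization pressure, not intent.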

No one designed this as a psychological weapon. The engineers who built these systems were optimizing for a business metric. The outcome — a global information environment systematically calibrated to maximize outrage and anxiety across billions of users — was an emergent property of the optimization pressure, not an intent.

The outcome is what it is regardless of intent.

What chronic outrage activation does to the brain. Outrage is a specific emotional state with specific neurological correlates. It activates threat-response systems — the amygdala in particular — while suppressing prefrontal regulatory activity. In acute doses, this is adaptive: outrage in response to genuine injustice mobilizes action.

In chronic doses — the kind produced by several hours daily of algorithmically selected outrage content — the effects are different. The threat-response system remains activated without the resolution that genuine threat encounters provide. Cortisol and adrenaline are present without the physical discharge of fight or flight. The regulatory systems that are supposed to down-regulate emotional arousal after threat resolution don't get the signal that the threat has resolved, because the algorithmic environment immediately surfaces a new threat.

The result is a kind of sustained sympathetic arousal that the nervous system experiences similarly to chronic stress. The downstream effects of chronic stress are well-documented: impaired immune function, sleep disruption, increased vulnerability to anxiety and depression, deterioration of executive function, social withdrawal or conflict escalation in relationships.

The mental health data over the past decade shows exactly this pattern in high-exposure cohorts. Adolescent girls, who are both the heaviest social media users and the most vulnerable to social comparison features, have shown the most dramatic increases in anxiety, depression, and self-harm rates. The correlation is not proof of causation, and researchers like Jean Twenge and Jonathan Haidt have debated the magnitude with researchers like Candice Odgers and Andrew Przybylski. But the pattern is real even in the contested research, and the mechanism is plausible enough that the argument "social media is probably fine" is increasingly hard to make with a straight face.

The cognition effects are the underrated story. The mental health story gets the headlines. The cognition story might matter more.

Chronic emotional arousal — specifically the outrage/anxiety cocktail that algorithmic media delivers — doesn't just make people feel bad. It actively impairs the cognitive functions that Law 2 is about.

Attention span and depth of processing. The infinite scroll format, combined with frequent emotional activation, trains a pattern of rapid, shallow engagement: scan, react, move on. The neural pathways for sustained, patient engagement with a complex argument are literally being used less. Habits of attention are formed through practice, and the practice most heavy social media users get is the opposite of what careful reasoning requires.

Belief rigidity. Outrage is an identity-reinforcing emotion. When you feel outrage about something, you're not in a state that's receptive to reconsidering your position. Chronic outrage about tribal enemies — which is what political social media primarily delivers — reinforces tribal identities and makes the nuanced, cross-identity reasoning that complex collective problems require extremely difficult.

Tolerance for complexity. Algorithmic media is effectively binary: content either works (gets engagement) or it doesn't. The nuanced take, the both-sides-have-a-point analysis, the it-depends-on-the-context position — these are algorithmically penalized because they don't activate strong emotional responses. Users learn, through exposure, that the world is composed of heroes and villains, not of people with different values navigating genuine tradeoffs. This epistemological simplification is cognitively damaging in ways that go far beyond political polarization.

What happens when people quit. There is a modest but growing literature on what happens to people who significantly reduce or eliminate social media use.

The consistent findings: reduced anxiety and depression. Improved sleep. Reduced sense of social comparison and FOMO. More time. Longer attention span for demanding tasks. People report feeling more present in their actual lives and relationships. These effects show up relatively quickly — weeks, not years.

The political effects are less studied but qualitatively interesting. People who step back from heavily algorithmic media often report that political issues seem less urgent and less personally threatening than they did during daily consumption. Not that the issues matter less — but that the chronic sense of emergency and existential threat to identity that drives political social media engagement diminishes. People can think about political questions rather than only react to them.

This is significant: the chronic emergency framing that algorithmic political media creates is itself a reasoning impairment. When everything is a crisis and every election is the most important of your lifetime and every political opponent is an existential threat — reasoning becomes nearly impossible. The calm required for patient analysis is incompatible with sustained crisis mode.

The aggregate civilizational effect. Now scale this to billions.

A world where billions of people spend several hours daily in states of algorithmically induced outrage and anxiety is a world with severely impaired collective reasoning capacity. It's not that everyone has become stupid. It's that the information environment is systematically preventing the cognitive states that careful collective reasoning requires.

The civilizational consequences show up in: political polarization that makes collective action on shared problems nearly impossible; declining trust in institutions and expertise that makes evidence-based governance difficult; increasing vulnerability to demagogic manipulation that specifically targets outrage; declining social capital as communities fragment along tribal media lines; and a chronic public health burden of anxiety and depression that consumes enormous healthcare resources and human potential.

A world where billions of people are in algorithmically maintained outrage states is a world where the problems this manual addresses — world hunger, persistent poverty, preventable death, institutional failure — cannot be collectively addressed, because the cognitive infrastructure for collective reasoning is being consumed by the attention economy.

What changes when people step off. The aggregate change from billions of people reducing rage-optimized media consumption is not just the sum of individual mental health improvements, though those are significant. It's a change in the civilizational information environment.

Political communication changes when outrage isn't the dominant currency. The politicians, media figures, and institutions that have optimized for outrage-based engagement lose their advantage. Political communication that actually conveys information, proposes solutions, and treats audiences as reasoning agents becomes more viable — not because people become saints, but because the incentive structure shifts.

Social trust recovers. Most of the research on social media and social trust finds significant negative effects from heavy algorithmic media use. Trust in neighbors, institutions, and other groups declines with exposure to algorithmically curated conflict. When that exposure reduces, trust tends to recover. Trust is the foundation of cooperation, and cooperation is required for everything in this manual.

Mental bandwidth for genuine engagement with complex problems opens up. The cognitive load of managing chronic outrage and anxiety is significant. When it lifts, that capacity becomes available for other things — for deeper engagement with local community, for the kind of patient civic participation that functional democracy requires, for the sustained attention that learning complex things demands.

The design alternative. It's worth being explicit that the current information environment is a design choice, not a law of nature. The platforms were not required to optimize for outrage. They chose it because it was profitable. Different design choices produce different outcomes.

Information environments designed to inform rather than activate — to increase understanding rather than increase engagement — are technically possible. They're less profitable in the current advertising-based business model. Changing the business model (subscription, public funding, regulated utilities) changes the optimization target. Changing the optimization target changes the information environment. Changing the information environment changes the mental health and cognitive functioning of the population that lives in it.
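The causal chain in this paragraph — change the objective, change the feed — can be sketched with the same kind of toy ranker. All scores here are invented, and the assumption that informativeness is negatively related to activation is a deliberate simplification mirroring the earlier claim that nuanced content activates weakly. The pipeline is identical in both runs; only the optimization target differs:

```python
import random

random.seed(1)

def make_item(i):
    """Invented scores: activation and informativeness are assumed to be
    negatively related, per the text's claim that nuanced, informative
    content activates weakly."""
    activation = random.random()
    informativeness = 1.0 - activation + random.uniform(-0.1, 0.1)
    return {"id": i, "activation": activation, "informativeness": informativeness}

corpus = [make_item(i) for i in range(1000)]

def rank(corpus, objective, k=10):
    """Same ranking machinery regardless of objective."""
    return sorted(corpus, key=objective, reverse=True)[:k]

engagement_feed = rank(corpus, lambda x: x["activation"])
informative_feed = rank(corpus, lambda x: x["informativeness"])

def mean(feed, key):
    return sum(item[key] for item in feed) / len(feed)

print("activation, engagement-optimized feed:   ",
      round(mean(engagement_feed, "activation"), 2))   # close to 1.0
print("activation, informativeness-optimized feed:",
      round(mean(informative_feed, "activation"), 2))  # close to 0.0
```

Nothing about the users or the corpus changed between the two runs — only the objective function. That is the sense in which the information environment is a design choice.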

This is a governance and policy question, not a user behavior question. Telling individuals to consume less social media while leaving the environment unchanged is analogous to telling people to exercise more while leaving the food system that produces chronic disease unchanged. Individual behavior change matters, but the system is producing the outcome and the system is what has to change.

A thinking planet — one where billions of people can reason about the information environment they live in — can name this mechanism, understand it, and demand different designs. It can regulate the attention economy the way it regulates other industries that externalize costs onto public health. It can fund alternative information architectures that serve epistemic rather than commercial goals.

That's Law 2 applied to the information environment itself: thinking clearly about what's being done to our thinking, and demanding something better.

The mental health improvement that follows isn't a side effect. It's the signal that the thinking environment has improved. And a civilization of people thinking clearly — even imperfectly, even with genuine disagreements, even with the full range of human emotion intact — is equipped to solve the problems this manual is about in a way that a civilization of people in algorithmic outrage loops simply is not.
