How Algorithms Decide What You Think About
There's a useful thought experiment. Imagine you had a personal assistant whose job was to curate everything you read, watch, and discuss. Now imagine that assistant was paid not by you, but by advertisers — and their bonus was based entirely on how long you stayed glued to your screen. How much would you trust their editorial judgment?
That's exactly the situation you're in every time you open a major platform. You just don't see the assistant.
Understanding how algorithmic curation actually works — the mechanisms, the documented effects, and the specific ways it reshapes cognition — is one of the more urgent pieces of contemporary media literacy. Because the algorithms are running whether or not you understand them. And they're working.
How Recommendation Systems Actually Work
Modern recommendation algorithms use a few core mechanisms. The most important is collaborative filtering: the system looks at users with similar behavioral profiles and recommends what those users engaged with. This sounds reasonable until you realize it creates feedback loops — the more you engage with a type of content, the more you see it, the more you engage, the deeper the groove gets.
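To make collaborative filtering concrete, here is a minimal sketch in Python. The users, items, and engagement numbers are invented for illustration, and real systems work with millions of users and learned embeddings rather than a tiny explicit matrix, but the feedback loop is the same: you get scored against people you already resemble, and their diet becomes yours.

```python
import numpy as np

# Toy user-item engagement matrix (rows = users, columns = content items).
# A cell holds an engagement signal such as watch time or clicks.
# All names and numbers are invented for illustration.
users = ["alice", "bob", "carol"]
items = ["politics_A", "politics_B", "cooking", "fitness", "true_crime"]
engagement = np.array([
    [5.0, 4.0, 0.0, 0.0, 1.0],   # alice
    [4.0, 5.0, 0.0, 1.0, 0.0],   # bob
    [0.0, 0.0, 5.0, 4.0, 0.0],   # carol
])

def cosine_similarity(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def recommend(user_index, k=2):
    """Recommend items the most similar user engaged with but this user hasn't."""
    target = engagement[user_index]
    sims = [(cosine_similarity(target, engagement[i]), i)
            for i in range(len(users)) if i != user_index]
    _, neighbor = max(sims)
    # Score unseen items by the nearest neighbor's engagement. This is the
    # feedback loop: whatever you already resemble, you get more of.
    candidates = [(engagement[neighbor][j], items[j])
                  for j in range(len(items)) if target[j] == 0.0]
    return [name for _, name in sorted(candidates, reverse=True)[:k]]

print(recommend(0))  # alice gets whatever bob, her nearest neighbor, engages with
```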
The second mechanism is engagement optimization. The system is trained on signals like watch time, shares, comments, and return visits. It gets better at predicting what you'll engage with. But here's the subtle problem: engagement is not the same as comprehension, satisfaction, or benefit. People engage heavily with content that angers them. They dwell on things that make them anxious. They share things that confirm their existing beliefs. The algorithm is a prediction engine for engagement — not a recommendation engine for your benefit.
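A rough sketch of what an engagement-optimized ranker looks like under the hood. The weights and predicted probabilities below are entirely hypothetical, not any platform's real values; the point is what the objective function contains, and what it doesn't.

```python
from dataclasses import dataclass

@dataclass
class Post:
    title: str
    p_click: float        # model's predicted probability of a click
    p_comment: float      # predicted probability of a comment
    p_share: float        # predicted probability of a share
    exp_dwell_sec: float  # expected dwell time in seconds

def engagement_score(p: Post) -> float:
    # Hypothetical weights of the kind an engagement-optimized ranker might use.
    # Note what's missing: nothing measures accuracy, satisfaction, or benefit.
    return (1.0 * p.p_click
            + 5.0 * p.p_comment
            + 8.0 * p.p_share
            + 0.05 * p.exp_dwell_sec)

feed = [
    Post("Calm explainer with nuance", 0.10, 0.01, 0.01, 40.0),
    Post("Outrage bait about the other side", 0.30, 0.12, 0.08, 25.0),
]
for post in sorted(feed, key=engagement_score, reverse=True):
    print(round(engagement_score(post), 2), post.title)
# The outrage post wins on predicted engagement, so it gets shown first.
```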
Internal Facebook research, leaked in 2021 and reported by the Wall Street Journal, found that the company's own researchers had concluded its algorithms were "making hate worse": inflammatory content traveled faster and further than neutral content, and the engagement-optimization systems were systematically amplifying it. Facebook's response was to largely shelve the proposed fixes because they reduced engagement metrics.
YouTube's recommendation system was documented in similar terms by Guillaume Chaslot, a former Google engineer who helped build it. He found the system had a systematic tendency to push users toward increasingly extreme content — not because extremism was explicitly rewarded, but because extreme content drove longer watch sessions. The algorithm didn't "want" radicalization. It just optimized for watch time, and radicalization happened to produce it.
Epstein's Research On Search Engine Manipulation
Robert Epstein, former editor-in-chief of Psychology Today and a research psychologist at the American Institute for Behavioral Research and Technology, has produced some of the most alarming research on algorithmic influence. His focus has been search engines, specifically Google.
Epstein's key finding, published in the Proceedings of the National Academy of Sciences (2015) and replicated multiple times since: the order of search results can shift the voting preferences of undecided voters by 20% or more — and in some demographic groups, up to 80%. The effect works because people trust search rankings. They assume results are ordered by relevance and credibility, not by whatever optimization criteria the search engine is running.
His team ran double-blind experiments in which they presented people with identical sets of search results, but in different orderings. The group that saw a particular candidate ranked higher became significantly more favorable toward that candidate. When debriefed about potential bias in search results beforehand, the effect was reduced — but not eliminated. The ranking still had influence even on people who knew it might be manipulated.
The deeper problem Epstein identifies is the auditing gap. Facebook's feed algorithm, Google's search algorithm, YouTube's recommendation algorithm — none of them are publicly auditable. You cannot inspect the decision rules. You cannot run controlled comparisons. You are subject to their outputs without access to their logic. This is a remarkable amount of epistemic power held by private companies with no democratic accountability for how it's exercised.
The Filter Bubble And The Distorted Consensus
Eli Pariser's 2011 book "The Filter Bubble" introduced the term that has since become standard. His core observation: when platforms personalize what you see, you increasingly only see content that confirms your existing preferences and beliefs. The world as represented in your feed comes to look like a world where most people agree with you.
This distorted consensus has several effects. First, it makes you overconfident. If you only see your side of debates, you stop having practice with the strongest versions of opposing arguments. Your rebuttals get weaker as the challenges disappear. Second, it makes compromise harder. If you genuinely believe your view is the universal consensus, you're less motivated to accommodate perspectives that seem fringe, even when they're actually mainstream among people outside your bubble. Third, it makes you more susceptible to manipulation, because you've lost the habit of encountering and evaluating disconfirming information.
There's a counterargument worth engaging: research by Levi Boxell and others has found that filter bubbles may be less impactful than feared, partly because heavy social media users are often exposed to more diverse viewpoints than light users (who consume more partisan traditional media). This is legitimate nuance. The filter bubble is real but not the whole story. The more precise claim is that algorithmic curation systematically distorts your information environment in ways that serve the platform's engagement interests rather than your epistemic interests — and that distortion compounds over time.
What Algorithmic Curation Does To Cognition
The cognitive effects go beyond belief formation. There's the attention dimension: algorithms train you to consume content in a particular way — fast, reactive, sequential. The UX design of feeds is specifically built to minimize the gap between pieces of content, to keep the scroll going. Over time, this trains a mode of attention that is shallow, reactive, and continuous. This is the opposite of the attention mode required for complex reasoning, deep reading, or genuine deliberation.
There's also the substitution effect. When algorithms deliver content to you, you stop having to seek. You lose the skill of finding things — navigating to sources, following references, building a reading list. Passive consumption replaces active curation. This matters because the act of seeking shapes cognition differently than the act of receiving. When you decide what to read and why, you're exercising judgment. When an algorithm decides, you're not.
The third effect is on what psychologists call "the availability heuristic." What you think about most is what comes to mind most easily, and what comes to mind most easily shapes your sense of what's important and common. If your algorithm feeds you constant crime content, your estimate of crime rates will be inflated. If it feeds you constant political conflict, you'll overestimate political polarization. The algorithm is shaping your base rates — your sense of what the world is like — not just your explicit beliefs.
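A toy simulation of that base-rate distortion, with made-up numbers: if a feed over-serves crime content because it holds attention, a reader who estimates prevalence from what they recently saw inherits the feed's proportions, not reality's.

```python
import random

random.seed(0)

# Made-up numbers for illustration: suppose violent crime features in 1% of
# local events, but an engagement-optimized feed devotes 30% of its slots to
# crime stories because they hold attention.
true_crime_rate = 0.01
feed_crime_share = 0.30

# A reader asking "how common is crime?" from memory is effectively sampling
# the feed distribution, not the world.
feed_sample = [random.random() < feed_crime_share for _ in range(200)]
naive_estimate = sum(feed_sample) / len(feed_sample)

print(f"actual rate: {true_crime_rate:.0%}, estimate from the feed: {naive_estimate:.0%}")
```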
Practical Frameworks For Resistance
Full algorithmic avoidance is not realistic for most people. What's achievable is conscious partial resistance — building habits that interrupt the default algorithmic path often enough to maintain some epistemic independence.
The direct source stack. For any domain you care about — technology, politics, science, finance — identify five to ten primary sources you trust and access them directly. RSS readers (Feedly, NetNewsWire) make this workable. You follow what you chose to follow, not what an algorithm selected for you.
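One low-effort way to implement the stack, for readers comfortable with a few lines of Python: a script built on the feedparser library that pulls the latest items from feeds you chose yourself. The feed URLs below are placeholders to replace with your own sources.

```python
# A minimal direct-source reader using feedparser (pip install feedparser).
# The URLs are placeholders; substitute the five to ten sources you chose.
import feedparser

MY_FEEDS = [
    "https://example.com/technology/feed.xml",
    "https://example.org/science/rss",
]

for url in MY_FEEDS:
    feed = feedparser.parse(url)
    print(f"\n== {feed.feed.get('title', url)} ==")
    for entry in feed.entries[:5]:   # latest five items per source
        print(f"- {entry.get('title', '(untitled)')}\n  {entry.get('link', '')}")
```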
The newsletter layer. Email newsletters bypass platform algorithms entirely. The author you subscribed to chose to send you that piece. Substack, direct newsletter subscriptions, academic mailing lists — these are algorithmically quieter than social platforms. Not neutral, but at least the curation decision is made by a human whose reasoning you can usually trace.
The search audit. Periodically run searches in incognito mode and compare results to your logged-in searches. Notice the differences. This makes the personalization visible and concrete.
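If you want to make the audit quantitative, here is a small sketch: paste the top result URLs from your logged-in search and from the same query in a private window, and compute how much they overlap. The URLs are placeholders.

```python
# Jaccard overlap between two result lists gathered by hand.
def overlap(logged_in, incognito):
    a, b = set(logged_in), set(incognito)
    return len(a & b) / len(a | b) if (a | b) else 1.0

logged_in_results = ["https://site-a.example", "https://site-b.example", "https://site-c.example"]
incognito_results = ["https://site-a.example", "https://site-d.example", "https://site-e.example"]

print(f"Jaccard overlap: {overlap(logged_in_results, incognito_results):.0%}")
# A low overlap means personalization is doing a lot of the selecting for you.
```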
The surprise test. Ask yourself regularly: when was the last time I encountered a piece of information that genuinely surprised me or challenged my existing understanding? If the answer is "I can't remember," your information environment has been optimized into a comfort zone. The algorithm has you.
The primary source discipline. Whenever you see a reference to a study, a speech, a book, or a document — go find the original. The algorithmic information environment is mostly commentary on commentary. Reading the actual document breaks the chain and gives you direct access to something the algorithm did not pre-chew for you.
The structured disagreement practice. Deliberately subscribe to thinkers you fundamentally disagree with — not to be angry at them, but to maintain practice with opposing argument. The goal is not agreement. The goal is calibration: knowing what the strongest version of the other position actually is.
The Bigger Stakes
Here's the macro view. The combination of private, unauditable algorithms and massive behavioral data creates a system where a small number of companies make editorial decisions — at scale, in real time — that shape what billions of people think about, feel outraged by, and believe is true. No newspaper editor in history had that reach. No government propaganda apparatus operated at that speed.
The question isn't whether this is a problem. It is. The question is what you do about it given that the infrastructure isn't going away. The answer starts with understanding the system clearly enough to navigate it with your eyes open. And it starts, practically, with deciding that your information diet is your responsibility — not something to be outsourced to a machine optimizing for someone else's business model.