Think and Save the World

The Practice of Community Prediction Markets for Local Issues


What Prediction Markets Actually Measure

The standard critique of prediction markets is that they're just betting. That critique misses the mechanism. What prediction markets measure is not enthusiasm, not intensity of preference, not political alignment — they measure calibrated belief under conditions that reward accuracy.

This distinction matters more than it sounds. Most civic input mechanisms measure enthusiasm or preference. Petitions measure how motivated people are to sign things. Town halls measure who shows up and who speaks loudest. Polls measure stated preferences, which diverge from real beliefs in predictable ways (people tell pollsters what sounds good; they tell a prediction market what they actually think is likely, because being wrong is costly).

A prediction market operationalizes the question: what do you actually believe? Not what you want, not what you fear, not what you think the official answer should be — what do you genuinely expect to happen?

That's a harder question to answer, and the difficulty is the point. Forcing rigorous thinking about expected outcomes, as opposed to desired outcomes, is one of the highest-leverage cognitive exercises you can run in a community.

The Calibration Problem in Communities

Most community reasoning is systematically miscalibrated in predictable directions.

Optimism bias: Proposals are evaluated on their best-case outcomes. The new community center will bring 500 residents a week. The economic development zone will create 200 jobs. The school reform program will raise test scores by 20%. These aren't deliberate lies — they're the honest projections of people who want the thing to work and unconsciously shade their expectations toward their hopes. Without a tracking system, nobody notices that the community center brings 80 residents a week, the economic zone creates 60 jobs, and the test scores rise by 4%.

Planning fallacy: Communities chronically underestimate how long and expensive projects will be. This is true at every scale — from neighborhood park renovations to transit infrastructure — and it's remarkably resistant to correction, because the people making the estimates are usually not the same people who will experience the consequences of the underestimate.

Availability bias: Recent, salient events dominate community predictions. If the last major development project in town went well, people overestimate the probability the next one will too. If the last one went badly, they underestimate it. Local memory is a poor guide to base rates.

A community prediction market creates a corrective mechanism for all three of these. When predictions are recorded and scored against outcomes, patterns of systematic bias become visible over time. The people who consistently over-predict optimistic outcomes learn — or at least, others learn to discount their forecasts. The people who accurately call the local dynamics develop visible track records that earn credibility. Over time, community discourse becomes better calibrated, because there's a feedback mechanism that current civic discourse almost entirely lacks.
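The scoring that closes this feedback loop can be very simple. A common choice is the Brier score: the squared error between a probability forecast and the 0/1 outcome, averaged over a forecaster's record. A minimal sketch in Python — the forecaster names and numbers are invented for illustration, not data from any real market:

```python
# Minimal sketch of scoring recorded predictions against resolved outcomes.
# All names and numbers below are illustrative, not from a real market.

def brier_score(forecast: float, outcome: int) -> float:
    """Squared error between a probability forecast and a 0/1 outcome.
    Lower is better; always guessing 0.5 scores a flat 0.25."""
    return (forecast - outcome) ** 2

# Each forecaster's recorded (probability, outcome) pairs.
records = {
    "optimist": [(0.95, 1), (0.90, 0), (0.85, 0)],
    "realist":  [(0.70, 1), (0.40, 0), (0.30, 0)],
}

for name, preds in records.items():
    avg = sum(brier_score(p, o) for p, o in preds) / len(preds)
    print(f"{name}: mean Brier score = {avg:.3f}")
```

Run over a season of resolved questions, a table like this is exactly the visible track record the text describes: the habitual over-predictor's average score drifts up, and the well-calibrated forecaster's credibility becomes legible to everyone.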

What a Local Prediction Market Actually Looks Like

Let's be concrete, because the concept is easier to grasp with specifics.

A neighborhood association in Portland, Oregon, ran an informal prediction market in 2019-2020 during a contentious debate about a proposed mixed-use development on a vacant lot. The market used a points system — no real money. Questions included:

- Will the project receive planning commission approval? (Resolved: Yes, within 6 months)
- Will the approved project include 20% affordable units? (Resolved: No, 15%)
- Will construction begin within 18 months of approval? (Resolved: No, 28 months)
- Will the retail spaces on the ground floor be more than 50% occupied within one year of opening? (Resolved: No, 40% occupied)
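The points mechanics behind a no-money market like this need not be complicated. One common option is parimutuel-style resolution, where participants stake points on a side and winners split the losing side's pot in proportion to their stakes. A toy sketch — the participant names and stakes are invented, not taken from the Portland market:

```python
# Toy parimutuel-style resolution of one binary question in a points market.
# Names and stakes are invented for illustration.

def resolve(stakes: dict[str, tuple[str, int]], outcome: str) -> dict[str, int]:
    """Winners recover their stake plus a proportional share of the losing
    side's points (floor division, so fractional points are dropped)."""
    winners = {p: s for p, (side, s) in stakes.items() if side == outcome}
    if not winners:
        # Nobody picked the right side: everyone just gets their stake back.
        return {p: s for p, (_, s) in stakes.items()}
    losers_pot = sum(s for side, s in stakes.values() if side != outcome)
    total_win = sum(winners.values())
    return {
        p: (s + losers_pot * s // total_win if side == outcome else 0)
        for p, (side, s) in stakes.items()
    }

# "Will construction begin within 18 months of approval?" resolves No.
stakes = {"ana": ("no", 30), "ben": ("yes", 20), "cai": ("no", 10)}
print(resolve(stakes, "no"))  # ana and cai split ben's 20 points 3:1
```

The design choice that matters is that payouts depend only on being right, which is what makes the recorded probabilities a measure of belief rather than enthusiasm.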

The market drew about 80 participants. The aggregate predictions were substantially more accurate than the official projections on three of four questions. Participants who made the most accurate predictions were people with specific local knowledge: a former planning commissioner, a commercial real estate agent, a longtime resident who knew the developer's previous projects.

The information value was real — the aggregate forecast correctly predicted that the construction timeline would slip, which the official projections did not. But the secondary value was in what the market revealed about community beliefs. The distribution of predictions on the affordable units question showed that most participants expected the developer to negotiate down from the 20% requirement — a belief that turned out to be accurate and that reflected community knowledge about how local negotiations actually work, knowledge that didn't appear in any official document.

Applying This in Schools and Institutions

Schools are particularly interesting venues for prediction markets because the feedback loops are relatively fast and the questions are concrete.

A high school civics class running a prediction market on local elections produces several cognitive benefits simultaneously. Students have to form probabilistic beliefs about outcomes — not just preferences. They have to reason about base rates (how often does the incumbent win? what's the effect of fundraising advantage?). They have to update their predictions as new information arrives. And they face a record that will eventually be scored, which disciplines wishful thinking.

Multiplied across a semester, this is calibration training in a form that no textbook can replicate. Students who've spent months forming, revising, and ultimately having their predictions scored against outcomes develop a relationship to uncertainty that's qualitatively different from students who've only read about how prediction markets work.

The same logic applies to institutional settings. A community health clinic that runs internal prediction markets on patient outcomes, program effectiveness, and resource utilization — using them as a tool for surfacing what staff actually believe about what works — accumulates calibration data that improves institutional decision-making. The question "what do we predict will happen if we implement this intervention?" forces more rigorous thinking than the question "should we implement this intervention?" The former requires engaging with evidence and uncertainty; the latter invites advocacy.

The Epistemic Stakes

Here's what makes community prediction markets more than a clever civic tool: they're a mechanism for building the epistemic capacity that communities need to function well under uncertainty.

Every community faces consequential decisions where the outcomes are uncertain. Will this policy reduce crime? Will this school program improve learning? Will this economic development project create the promised jobs? These questions are not answerable with certainty in advance. What communities can do — what prediction markets train them to do — is reason probabilistically about uncertain futures. That means holding multiple scenarios simultaneously, attaching realistic probabilities rather than certainties, updating when evidence arrives, and tracking the accuracy of past predictions to improve future ones.
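The "updating when evidence arrives" step has a standard mechanical form: Bayes' rule applied to odds. A minimal sketch with illustrative numbers (the prior and the likelihood ratio below are assumptions chosen for the example, not estimates from any real project):

```python
# Sketch of "updating when evidence arrives" via Bayes' rule on odds.
# The prior and likelihood ratio are illustrative assumptions.

def update(prior: float, likelihood_ratio: float) -> float:
    """Posterior probability after observing evidence that is
    `likelihood_ratio` times likelier if the hypothesis is true."""
    odds = prior / (1 - prior)
    posterior_odds = odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

# Prior: 40% chance the development project creates the promised jobs.
p = 0.40
# Evidence: an anchor tenant signs a lease, assumed 3x likelier in the
# world where the project succeeds.
p = update(p, 3.0)
print(f"after update: {p:.2f}")  # 0.67
```

Nothing about this requires the arithmetic to be explicit in practice; the point is that a market participant revising a stake upward after news is performing exactly this operation, and the market records it.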

This is the cognitive toolkit for navigating a complex world. It's the toolkit that public health officials need, that emergency managers need, that urban planners need — and that community members need to evaluate the claims of all of the above.

The tragedy is that this toolkit is not systematically taught or practiced anywhere in most communities' civic life. Opinions are expressed with confidence. Predictions are made without tracking. Claims about what a policy will accomplish are not distinguished from claims about what the speaker wants to accomplish. The result is civic discourse that generates heat without generating calibration.

Prediction markets install the calibration mechanism. They're not a cure-all — they require sustained participation, clear question resolution criteria, and enough time for feedback loops to close. But in communities where they've been tried seriously, they consistently produce more rigorous public reasoning than alternative civic engagement formats.

The Connection to Collective Intelligence

The deepest argument for community prediction markets connects to the basic premise of distributed intelligence: diverse groups with different information, combined through the right aggregation mechanism, outperform individuals and expert panels on a surprisingly wide range of forecasting tasks.

This is Philip Tetlock's finding from his Good Judgment Project, extended: the best forecasters aren't the most credentialed or the most confident. They're the most actively calibrated — they maintain probabilistic beliefs, update them on evidence, and resist the pull of ideological or tribal certainty. The skills of good forecasting are learnable, and they transfer.
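The aggregation mechanism is where the "combined through the right mechanism" clause does its work. Two simple options, sketched with invented forecasts — the extremizing exponent here is a tunable parameter chosen for illustration, not a value reported by the Good Judgment Project:

```python
# Sketch of two simple ways to pool diverse probability forecasts.
# The five forecasts and the exponent `a` are illustrative assumptions.
import math

forecasts = [0.60, 0.75, 0.80, 0.55, 0.70]  # five participants

# 1. Arithmetic mean of probabilities.
mean = sum(forecasts) / len(forecasts)

# 2. Mean in log-odds space, optionally sharpened by an exponent a > 1,
#    which pushes the pooled forecast away from 0.5 when independent
#    forecasters all lean the same way.
logits = [math.log(p / (1 - p)) for p in forecasts]
mean_logit = sum(logits) / len(logits)
pooled = 1 / (1 + math.exp(-mean_logit))

a = 2.0  # extremizing exponent (tunable; a = 1 means no extremizing)
extremized = 1 / (1 + math.exp(-a * mean_logit))

print(f"mean: {mean:.3f}, log-odds pooled: {pooled:.3f}, "
      f"extremized: {extremized:.3f}")
```

The intuition behind extremizing is that each participant holds only part of the community's information; if several partially informed people independently lean the same way, the pooled forecast should lean further than any of them individually.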

When communities practice these skills together — when the local contractor, the retired teacher, the city planner, the small business owner, and the high school student all participate in forming and revising predictions about local outcomes — they're building something that goes beyond the predictions themselves. They're building a shared epistemic culture: a community norm that says "we believe things with calibrated confidence, we update when evidence arrives, and we track the accuracy of our predictions."

That culture, scaled up, is what communities need to make good collective decisions under uncertainty. It's what separates a community that navigates a crisis by reasoning together from one that fragments into confident competing camps each certain the other is wrong.

The practice of prediction markets at the community level is a training ground for that culture. It's probably the highest-leverage tool for building calibrated civic reasoning that most communities aren't using. The barriers are low — no real money needed, no special infrastructure, just a shared question, a resolution mechanism, and the willingness to keep score.

What we know about large-scale human coordination is that it works best when communities can form accurate shared beliefs about what's true and what's likely. Prediction markets, practiced locally, build exactly that capacity — one verified forecast at a time.
