
Can AI Mend A Country I No Longer Trust?

Notes from an AI Architect in Exile

By Joshua Estrin · Published 5 days ago · 4 min read

From my desk, I watch the same cycle play out on American news: a crisis, a hearing, a soundbite, a fundraising email—and then nothing. Democracy has started to feel less like a system of governance and more like a piece of software permanently labeled “beta,” shipped with known bugs and no real plan to patch them.

I build and deploy AI systems for a living. I’ve watched machine‑learning models quietly add tens of millions of dollars in revenue to companies by doing something very simple that most leaders resist: telling the truth about what is actually happening in their data. That experience leaves me with an uncomfortable question. If we can use AI to debug complex businesses, can we use it to help mend a country that no longer trusts itself?

America as unpatched code:

Most democracies aren’t designed for the volume and velocity of information we live with now. The American version certainly isn’t.

Policies get introduced like new product features—big launch, big rhetoric, no serious stress‑testing of how they’ll perform in the wild. Elections are run as marketing campaigns optimized for emotion, not accuracy. The incentive structure rewards outrage and short‑term wins over long‑term stability. In software terms, this is how you end up with a codebase that technically runs, but only because everyone is afraid to touch it.

When I made the decision to leave, it wasn’t because of one election or one politician. It was the accumulation of unresolved bugs: gerrymandered districts that made certain votes meaningless, information ecosystems tuned for maximum rage‑per‑click, policies written without any serious modeling of downstream impact. At some point, “we’ll fix it next cycle” began to sound like “we’ll fix it in the next release,” and I’d seen enough failed products to know how that story ends.

Where AI could actually help mend things:

AI is not magic. But it is very good at three things America is very bad at right now: stress‑testing ideas, surfacing inconvenient patterns, and dealing with scale.

Policy stress‑testing before real people pay the price:

Before a major company launches a new product, good teams simulate demand, failure modes, and edge cases. We can do the same for public policy. Advanced models can ingest historical data, economic indicators, and behavioral patterns to simulate how a proposed law is likely to play out across regions, industries, and income levels.

Imagine a requirement that any major piece of legislation come with an AI‑driven “impact report”: who benefits, who loses, and where unintended consequences are most likely to appear. Not as a final verdict, but as a starting point for serious debate.
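As a toy illustration of what such an impact report might look like underneath, here is a minimal Monte Carlo sketch in Python. Every number in it is invented—the income groups, the assumed effect sizes, and the uncertainty band are placeholders; a real model would be fit to historical data and economic indicators, as described above.

```python
import random

# Hypothetical inputs: group name -> (median income, assumed effect of the
# proposed policy as a fraction of income). All values are made up.
GROUPS = {
    "low_income": (30_000, 0.04),
    "middle_income": (65_000, 0.01),
    "high_income": (150_000, -0.02),
}

def simulate_impact(n_runs=10_000, seed=0):
    """Return the mean simulated annual income change per group.

    Gaussian noise is added to each assumed effect to reflect model
    uncertainty, then averaged over many runs.
    """
    rng = random.Random(seed)
    report = {}
    for group, (income, effect) in GROUPS.items():
        total = 0.0
        for _ in range(n_runs):
            noisy_effect = rng.gauss(effect, 0.01)  # crude uncertainty band
            total += income * noisy_effect
        report[group] = total / n_runs
    return report

report = simulate_impact()
for group, change in report.items():
    verdict = "benefits" if change > 0 else "loses"
    print(f"{group}: {change:+.0f}/year ({verdict})")
```

The point is not the arithmetic but the output shape: a per‑group estimate of who benefits and who loses, with uncertainty baked in—a starting point for debate, not a verdict.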

Surfacing bias and blind spots in critical systems:

In commercial work, we use AI to analyze millions of data points and uncover patterns humans either can’t see or won’t admit. The same methods can reveal disparities in policing, sentencing, healthcare access, hiring, and lending.

Instead of arguing from anecdotes, leaders could confront pattern‑level evidence: for this policy, in this county, with this demographic, here is what is actually happening. It doesn’t settle moral arguments, but it stops debates from being entirely unmoored from reality.

Information triage in a polluted attention economy:

One of the quiet superpowers of well‑designed AI is triage. Models can prioritize which signals matter, flag contradictions, and cluster related facts so decision‑makers aren’t drowning in noise.

Used correctly, AI could help journalists, staffers, and even voters sort claims by evidentiary support, highlight what’s missing from a narrative, and distinguish between problems that feel urgent and problems that are genuinely systemic. Right now, most public discourse does the opposite.
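To make "sorting claims by evidentiary support" concrete, here is a deliberately simple Python sketch. The claims, source counts, and scoring rule are all invented for illustration; a production triage system would use retrieval and trained models rather than hand‑counted sources.

```python
from dataclasses import dataclass

@dataclass
class Claim:
    """A public claim with hand-counted (hypothetical) source tallies."""
    text: str
    supporting_sources: int = 0
    contradicting_sources: int = 0

    def support_score(self) -> float:
        # Ratio of support to total evidence, smoothed so that claims
        # with no sources at all rank lowest.
        total = self.supporting_sources + self.contradicting_sources
        return self.supporting_sources / (total + 1)

claims = [
    Claim("Crime rose 40% last year", 1, 4),
    Claim("Turnout hit a 20-year high", 6, 1),
    Claim("The bill cuts funding by half", 0, 0),
]

# Triage: best-supported claims first; contested claims get flagged.
ranked = sorted(claims, key=lambda c: c.support_score(), reverse=True)
for c in ranked:
    flag = " [contested]" if c.contradicting_sources > c.supporting_sources else ""
    print(f"{c.support_score():.2f}  {c.text}{flag}")
```

Even this crude ranking does something public discourse rarely does: it separates how loudly a claim circulates from how well it is supported.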

In each of these cases, AI is not replacing human judgment. It is creating a clearer, more honest starting line for it.

Where AI will make the wounds worse:

Of course, the same technology that can help mend a democracy can also quietly tear it further apart. AI becomes dangerous in politics when three things happen.

It is trained on the very biases we claim to be fixing:

If you feed a model years of skewed policing data or partisan media, it will faithfully learn those biases and present them back as “insight.” Without rigorous oversight, AI can turn existing injustice into efficient, scalable injustice.

Leaders treat it as a scapegoat instead of a mirror:

We’ve already seen companies blame “the algorithm” for decisions they don’t want to own. In a political context, that move is even more corrosive. “The model recommended this” is not an excuse; it’s an admission that you outsourced courage.

It is optimized for engagement, not outcomes:

If we simply bolt AI onto existing media and campaign structures, it will do what it does best: optimize. If the underlying objective is clicks, donations, or time‑on‑platform, the models will happily serve up more outrage, more division, and more personalized propaganda. AI will not save us from bad incentives; it will supercharge them.

The question isn’t whether AI will influence American democracy; it already does. The only question is whether we use it to interrogate our systems or to harden them.

AI as repair tool, not redeemer:

I left the United States when it became clear that the people steering the ship were more interested in winning news cycles than repairing the hull. From a distance, it’s tempting to write the whole experiment off as doomed. But exile has a way of clarifying what you still care about.

I continue to work at the intersection of AI and strategy because models, unlike politicians, can be retrained. When an AI system produces flawed outcomes, we examine the data, adjust the architecture, and try again. There is humility built into that process: we assume we are missing something, and we prove ourselves wrong or right with evidence.

If America wants to mend itself, it will need to adopt that mindset. Use AI to run the uncomfortable simulations, to expose the patterns no one wants to see, to force transparency into conversations that have survived on spin. But then insist that human beings—elected, accountable, and visible—make the final call and stand behind it.

AI can help repair a broken democracy. It can’t supply the courage to admit it’s broken in the first place.

Written where human nervous systems and machine logic collide. AI‑assisted, human‑owned.


About the Creator

Joshua Estrin

Joshua Estrin, PhD, walked out the night Donald Trump won again—when the country went full Handmaid’s Tale and called it democracy. Neurospicy therapist and AI strategist writing where data, trauma, queerness, and politics collide.
