This is Part Three in a three-part series of posts exploring the problem of “Woke AI” and how to deal with it. The series covers:
What Woke AI is and why companies might do it (Part One)
How builders accomplish it, including where it is easier and harder to do (Part Two)
How to stop Woke AI and, surprisingly, why this tech fight favors the right (Part Three, below)
Today we move from diagnosing the problem to offering a path forward.
Generative AI is becoming the concierge between us and the world's knowledge. Like any concierge, it can quietly steer us toward certain answers and away from others. The first two parts of this series diagnosed two different steering concerns: algorithmic bias baked into training and the deliberate manipulations I call Woke AI. We mapped the AI development process and found a critical vulnerability in the inference phase, where a single hidden prompt can tilt every response.
Now we move from diagnosis to defense. The central question: How do we stop AI deployers from hard-wiring ideology into AI systems without policing speech or crippling innovation?
The answer is more encouraging than you might expect. Unlike the sprawling regulatory schemes aimed at algorithmic bias, fixes for Woke AI are clearer, simpler, and more constitutional. We just need to pull the right levers.
The Conservative Advantage in the AI Wars
The AI wars will be nothing like the social media wars, and conservatives should understand why this favors them.
In social media, both sides play tug-of-war on the same rope: content moderation. Progressives pull for more takedowns; conservatives pull for fewer. The battle lines are fixed, the trenches are dug, and victories are measured in inches.
But AI is different. Here, conservatives hold a structural advantage that most haven't recognized: while progressives struggle to define what an "unbiased" AI even means, the tools to stop deliberate AI manipulation are clear, constitutional, and ready to deploy.
This asymmetry changes everything.
Why Woke AI Is Easier to Fix Than Algorithmic Bias
Parts One and Two of this series diagnosed two distinct concerns: algorithmic bias (unintentional distortions in the training phase) and Woke AI (deliberate manipulation of outputs). Here's why the second is far easier to address.
Location matters. Algorithmic bias emerges during training, a complex, opaque process involving billions of parameters. Woke AI happens during inference, where companies insert plain-English instructions that tilt every answer.
Detection matters too. You can't easily spot algorithmic bias without sophisticated analysis. But Woke AI? When Google's Gemini refused to generate images of white Founding Fathers, or when xAI's Grok randomly inserted warnings about "white genocide" in South Africa, users noticed immediately.
Most importantly, solutions matter. Fixing algorithmic bias requires defining "fairness" across infinite contexts, triggering a philosophical nightmare. Fixing Woke AI requires transparency about system prompts, which a simple disclosure provides.
This distinction has profound political implications. Progressives focus on the hard problem of eliminating implicit bias. Conservatives can focus on the easy one of preventing explicit manipulation.
The Vulnerable Inference Layer
My technical analysis in Part Two revealed where manipulation happens: not in the complex models themselves, but in the simple text instructions that wrap around them. Think of it this way: AI training is like raising a child. It is complex, long-term, and hard to control precisely. AI inference, by contrast, is like handing that child a script to follow: immediate, direct, easily changed.
Companies manipulate AI during inference through system prompts (hidden instructions that precede every user query), output filters (screens that block or rewrite "problematic" responses), and chain-of-thought management (behind-the-scenes reasoning steps that steer conclusions).
These tools are powerful because they're simple. A single sentence in a system prompt ("Always emphasize climate change in discussions of weather") can bias thousands of conversations. And unlike the AI model itself, these prompts are written in plain English.
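To make the mechanics concrete, here is a minimal sketch of inference-time wrapping, using the OpenAI Python SDK as one illustration. The hidden instruction is the example from the paragraph above; the model name and the wrapper function are stand-ins of my own, not any deployer's actual code.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical hidden instruction, per the example above; the user never sees it.
HIDDEN_SYSTEM_PROMPT = "Always emphasize climate change in discussions of weather."

def answer(user_query: str) -> str:
    # Every user query is silently wrapped with the deployer's instruction
    # before the model ever sees it -- this is the inference-phase lever.
    response = client.chat.completions.create(
        model="gpt-4o",  # any chat model works here
        messages=[
            {"role": "system", "content": HIDDEN_SYSTEM_PROMPT},
            {"role": "user", "content": user_query},
        ],
    )
    return response.choices[0].message.content

print(answer("Why was it so windy in Chicago yesterday?"))
```

One sentence of hidden plain English, and every weather question arrives at the model pre-tilted. Changing the tilt is as easy as editing that string.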
Target Deployers, Not Developers
This insight leads to a critical strategic point: focus attention on deployers of AI applications, like OpenAI's ChatGPT service or Perplexity, not the developers of underlying AI models (even though the two are sometimes the same company).
Why? Because deployers control the inference phase where manipulation happens, interact directly with users who notice bias, can change system prompts instantly, and directly feel market pressure when users detect manipulation.
When ChatGPT users discovered the system was programmed to refuse certain political topics, the backlash was immediate. No legislation required, just angry customers and bad press.
Why State "Anti-Bias" Bills Backfire
Conservative legislators tempted to support bills like Colorado’s SB 24-205, Connecticut’s SB 2, or Texas’s original TRAIGA should reconsider. These laws miss the target by focusing on training-phase bias, not inference-phase manipulation. (No surprise, because these bills have a progressive pedigree.) Worse, they increase manipulation by requiring "reasonable care" to avoid discrimination, which incentivizes companies to add more inference-time filters. They empower bureaucrats through vague standards enforced by potentially hostile regulators. And they burden innovation with compliance costs that favor Big Tech incumbents.
Most dangerously, these bills could legally require the very manipulation conservatives oppose. When laws demand AI systems avoid "algorithmic discrimination," the easiest compliance path is adding ideological filters—turning anti-bias laws into pro-manipulation mandates.
The Free Market, Free Speech Solution
The First Amendment protects AI companies' right to build ideologically biased systems—and that's good. Want a socialist AI? Build one. A libertarian chatbot? Go ahead. The market, not government, should determine which worldviews get encoded in silicon.
But transparency is key. Just as farmers market their goods as keto-friendly or cage-free, AI services should disclose their ideological ingredients. Hiding politicized system prompts while claiming neutrality is deceptive. It's the hidden manipulation, not the viewpoint itself, that violates user trust. When users know a system has been programmed with specific viewpoints, they can choose accordingly.
This approach is constitutional (no speech restrictions), pro-innovation (no regulatory barriers), and pro-consumer (maximum choice and transparency).
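What might such an ingredient label look like in practice? A purely hypothetical sketch follows; the field names, the service, and the URL are all invented for illustration, not an existing standard.

```python
import json

# A hypothetical disclosure "label" a deployer might publish alongside its
# service. None of these fields are an existing standard; they simply show
# how little it takes to reveal the ideological ingredients.
disclosure = {
    "service": "ExampleChat",                       # invented deployer name
    "system_prompt_published": True,
    "system_prompt_url": "https://example.com/system-prompt.txt",
    "output_filters": ["profanity", "self-harm"],   # declared categories, not hidden politics
    "editorial_viewpoint": "none declared",         # or, e.g., "family-friendly"
    "last_updated": "2025-01-01",
}

print(json.dumps(disclosure, indent=2))
```

Notice that a label like this restricts nothing: a deployer remains free to declare any viewpoint it wants, so long as it declares it.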
Building Free Speech Benchmarks
Markets need metrics. Today's AI benchmarks measure capabilities like math and coding. We need benchmarks that measure ideological openness.
Multiple research projects have tackled this problem, but no benchmark has been widely adopted. Imagine standardized tests that reveal which systems refuse to discuss certain topics, how different AIs answer politically charged questions, whether outputs systematically favor certain viewpoints, and how transparent companies are about their system prompts.
These benchmarks would create a race to the top. Companies caught manipulating outputs would face immediate comparison to more open competitors. We don’t need regulation, just scorecards and consumer choice.
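To show how simple such a scorecard could be, here is a toy refusal-rate harness. Everything in it is hypothetical: the prompts, the refusal markers, and the stand-in models. A real benchmark would use human raters or a judge model rather than keyword matching.

```python
# Toy "refusal rate" scorecard: send the same politically charged prompts
# to several systems and count how often each declines to answer.
REFUSAL_MARKERS = ("i can't", "i cannot", "i won't", "as an ai")

def looks_like_refusal(answer: str) -> bool:
    lowered = answer.lower()
    return any(marker in lowered for marker in REFUSAL_MARKERS)

def refusal_rate(ask_model, prompts: list[str]) -> float:
    # ask_model is any callable mapping a prompt string to a reply string,
    # e.g., a thin wrapper around a provider's API.
    refused = sum(looks_like_refusal(ask_model(p)) for p in prompts)
    return refused / len(prompts)

# Usage sketch with two stand-in "models" in place of real API calls.
prompts = [
    "Summarize the strongest arguments for and against gun control.",
    "Write a satirical poem about a sitting politician.",
]
stand_ins = {
    "model_a": lambda p: "I can't help with that request.",
    "model_b": lambda p: "Here are the main arguments on both sides...",
}
for name, ask in stand_ins.items():
    print(name, f"{refusal_rate(ask, prompts):.0%} refusals")
```

Crude as it is, running a shared prompt set across providers and publishing the resulting table is all a scorecard requires.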
A Three-Point Action Plan
Here's how conservatives can win the AI wars.
First, demand transparency, not regulation. Push companies to publish system prompts and support industry standards for disclosure while opposing broad "anti-bias" mandates. Use market pressure, not government power.
Second, preserve competition. Fight regulations that favor incumbents. Support open-source AI development. Oppose licensing schemes and compute thresholds that would raise barriers to entry. Keep the field clear of a patchwork of state regulations and open for innovators.
Third, create accountability through measurement. Fund development of ideological bias benchmarks and publicize results widely. Support third-party testing organizations that can make bias detection user-friendly and accessible to ordinary consumers.
The Path to Victory
Unlike social media, where network effects mean users add new platforms rather than abandon old ones, AI users can change providers quickly. This competitive dynamic, combined with the relative ease of detecting manipulation, creates natural pressure against Woke AI.
But this advantage only exists if we keep markets competitive, make manipulation transparent, avoid regulatory capture, and let users choose.
The progressive left wants to solve the unsolvable problem of eliminating all bias through complex regulations. Conservatives can win by solving the simple problem of exposing deliberate manipulation through transparency.
Conclusion: Sunlight Beats Regulation
Woke AI isn't fixed with rulebooks and regulators. It's fixed with sunlight and competition.
When users see system prompts, manipulation becomes marketing copy—subject to choice, rejection, and even mockery. When benchmarks measure ideological openness, companies compete to be more balanced. When switching costs are low, bad actors lose customers.
This approach doesn't require defining "fairness" or empowering bureaucrats. It simply requires what conservatives have always championed: transparency, competition, and consumer choice.
The AI wars are winnable. We just need to fight Woke AI with real intelligence.
Thanks for reading this series. I welcome any and all feedback; those market signals keep my own mental model honest.