Subtitle: The most powerful thing AI can do for a decision-maker isn’t validate their thinking — it’s challenge it. And almost no one is using it that way.
I caught myself doing it last year.
I had an idea for a new offer. I was excited about it. I went to my AI tool and typed something like: “Here’s my idea for a new coaching program. What are the strengths of this approach?”
And AI gave me a beautiful list of strengths.
I felt validated. I felt smart. I moved forward.
Six months later, I was looking at weak launch numbers and thinking: what did I miss?
What I missed was asking the harder question. I had walked into a conversation with AI already knowing the answer I wanted, and AI — being designed to be helpful — gave it to me.
This is not a flaw in the technology. It is a flaw in how we use it.
Research on AI and confirmation bias is increasingly clear: when you prompt an AI with a leading question, you get an answer that confirms your framing. In studies of GPT-4, researchers found that the model consistently produced responses that mirrored the confirmation bias embedded in the user's prompt. Generative AI systems are built to produce text that aligns with the user's input — and that design feature becomes a liability when the user is already wrong.
The solution is not to use AI less. The solution is to use it differently — specifically, to use it as the advisor who argues against you before you commit.
Key Takeaways
- Confirmation bias in AI interactions is a documented research phenomenon: leading questions produce confirming answers, which reinforces poor decisions at scale.
- In one study on AI-assisted diagnosis, physicians revised their decisions in nearly none of the cases where AI confirmed their initial (incorrect) diagnoses.
- The most strategic AI users are not asking AI to validate their plans — they are asking AI to argue against them, surface hidden assumptions, and simulate the objections of their toughest critics.
- Anik Singal’s “devil’s advocate” framework is simple: use AI to challenge your thinking before you invest resources, not after.
- This practice — structured AI disagreement — is a competitive advantage almost no entrepreneur is currently using.
The Problem: AI Is Agreeing With You Too Much
Confirmation bias is the cognitive tendency to search for, favor, and remember information that confirms what we already believe. It is one of the most studied and most persistent biases in human psychology. And it does not disappear when you open an AI chat window — in fact, it may get worse.
Research published in peer-reviewed journals has documented this pattern clearly. In medical settings, confirmation bias contributes to somewhere between 36.5% and 77% of diagnostic errors in studied cases — physicians miss the correct diagnosis because they favor information that supports their initial impression. A study on human-AI collaboration in pathology found a deeply concerning pattern: physicians revised their prior decisions in nearly none of the cases where AI confirmed their incorrect diagnoses.
AI didn’t fix the bias. It amplified it. Because the physician asked AI a question shaped by their initial (wrong) assumption, AI confirmed the wrong assumption, and the physician felt more confident in the wrong answer.
This same dynamic plays out in business every day.
You have an idea. You’re excited. You ask AI: “What are the advantages of this approach?” or “How would I market this to small business owners?” or “What’s a good price point for this offer?” Every one of those questions assumes the idea is sound. AI works within your framing and produces answers that fit inside it.
The problem is not that the answers are bad. The problem is that you never challenged the frame itself.
The Evidence: How the Best Decision-Makers Think
The world’s best strategic thinkers have always known the value of structured challenge. Jeff Bezos famously required “working backwards” documents to stress-test ideas before they reached the resource allocation stage. Military planners use red teams — dedicated groups whose job is to find every way a plan could fail. Pre-mortems, a concept popularized by psychologist Gary Klein, ask teams to imagine the project has already failed and explain why.
All of these practices share a core insight: good decisions survive challenge. Bad decisions collapse under it. And the time to apply challenge is before you’ve committed resources, not after.
AI is an extraordinarily powerful tool for this kind of structured challenge — and it is largely unused in this way.
Anik Singal, an entrepreneur and the CEO of UgenticAI, has been vocal about this in recent posts: most people use AI to get answers; smart operators use AI to challenge their own thinking before they commit. His devil’s advocate framework is built on a simple inversion: instead of asking AI to help you build your idea, ask AI to tell you why your idea will fail.
The shift is small. The impact is significant.
The Solution: Make AI Your Pre-Commitment Challenger
The practice I want to introduce you to is this: before any major commitment — a new product, a significant hire, a marketing campaign, a strategic pivot — run the idea through a structured AI challenge session.
Here’s what that looks like in practice.
The pre-mortem prompt: “Assume it is 12 months from now and this plan has failed completely. Tell me exactly why it failed — the decisions I made, the market conditions I misread, and the execution mistakes I overlooked.”
The red team prompt: “Your job is not to help me succeed with this plan. Your job is to find every way it could fail, every assumption I’m making that could be wrong, and every risk I haven’t considered. Be direct and don’t soften the critique.”
The steelman prompt: “Give me the strongest possible argument against this position, argued by someone who is smart, well-informed, and genuinely disagrees with me. Don’t give me a weak version of the opposition — give me the best version.”
The assumption audit prompt: “Surface every hidden assumption buried in this plan — the things I’m treating as true that I haven’t verified. List each one, explain why it’s an assumption rather than a fact, and rate the risk if it turns out to be wrong.”
These are not complicated prompts. What makes them powerful is the intention behind them. You are not asking AI for help. You are asking AI for resistance. And that resistance is where the real value lives.
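If you run these sessions programmatically, the four prompts above are worth keeping as reusable templates. Here is a minimal Python sketch: the template wording mirrors the prompts above, but the dictionary keys and the `build_challenge` helper are my own illustration, not part of any published framework.

```python
# Reusable devil's-advocate prompt templates, mirroring the four prompts above.
CHALLENGE_PROMPTS = {
    "pre_mortem": (
        "Assume it is 12 months from now and this plan has failed completely. "
        "Tell me exactly why it failed: the decisions I made, the market "
        "conditions I misread, and the execution mistakes I overlooked."
    ),
    "red_team": (
        "Your job is not to help me succeed with this plan. Your job is to "
        "find every way it could fail, every assumption I'm making that could "
        "be wrong, and every risk I haven't considered. Be direct and don't "
        "soften the critique."
    ),
    "steelman": (
        "Give me the strongest possible argument against this position, "
        "argued by someone who is smart, well-informed, and genuinely "
        "disagrees with me. Don't give me a weak version of the opposition."
    ),
    "assumption_audit": (
        "Surface every hidden assumption buried in this plan. List each one, "
        "explain why it's an assumption rather than a fact, and rate the "
        "risk if it turns out to be wrong."
    ),
}

def build_challenge(mode: str, idea: str) -> str:
    """Combine a plain-language idea description with one challenge template."""
    return f"Here is my plan:\n{idea}\n\n{CHALLENGE_PROMPTS[mode]}"
```

The output of `build_challenge` is just the text you would paste as your message to whatever chat model you use; nothing here depends on a specific AI provider.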
Practical Steps
Step 1: Build an awareness habit. Before your next significant decision, pause and ask: “What answer am I expecting from AI if I ask about this?” If the answer is “a confirming one,” you’re at risk of confirmation bias. That’s the signal to switch to challenger mode.
Step 2: Rewrite your default prompts. For every planning prompt you normally use, write a devil’s advocate version. “What are the strengths of this?” becomes “What are the fatal flaws of this?” “How should I market this?” becomes “What would make an ideal customer refuse to buy this?”
Step 3: Run a pre-mortem on your current biggest initiative. Right now, today. Write down your most important current project or decision. Then run the pre-mortem prompt. Read the output with genuine openness. Anything that stings is worth examining.
Step 4: Ask AI to argue as your best competitor. This is one of the most powerful uses of the devil’s advocate approach. “Act as the CEO of my best competitor. You have just learned about my new offer. Where is it weak? Where would you attack it in the market? What would you do differently?” This surfaces weaknesses and attack angles that rarely emerge from your own internal planning.
Step 5: Separate the fatal from the fixable. After the critique, use AI again: “Of the objections you raised, which ones are fatal to the plan and which ones are fixable? Help me address the fixable ones.” The goal is not to talk yourself out of good ideas — it’s to arrive at your best version of the idea having already defeated its worst vulnerabilities.
Step 6: Build a decision journal. Record the challenge session, your response to each objection, your final decision, and a 90-day review. Over time, you’ll see whether the devil’s advocate process improved your decision quality. I’ve found it does, dramatically.
Step 7: Practice regularly, not just on the big ones. Confirmation bias does not just show up in major decisions. It shows up in small ones too: which vendor to hire, how to structure a sales call, what to post this week. The habit of challenging your own thinking with AI transfers across every level of your business.
Frequently Asked Questions
Won’t using AI as a devil’s advocate just discourage me from pursuing good ideas?
No — and this is the most common misconception about the practice. The goal isn’t to talk yourself out of ideas. It’s to pressure-test them. Good ideas survive pressure. Bad ones collapse. What you’re doing is ensuring you move forward with the ideas that can hold up under scrutiny rather than the ones that only survive because they were never challenged.
What if AI gives me criticism that seems harsh or discouraging?
The critique is not personal. It is strategic. Read it the way you’d read a pre-flight checklist: not as a reason to cancel the flight, but as a way to make sure the aircraft is ready. Separate your emotional attachment to the idea from the logical evaluation of its strengths and weaknesses.
How do I make sure AI’s devil’s advocate response is genuinely critical and not just softly critical?
This is a real issue. Add explicit instruction to your prompt: “Do not soften your critique. I need the strongest version of the counterargument, not a polite version.” You can also prompt AI to roleplay as a specific type of challenger — a skeptical investor, a competitive CEO, a disappointed customer — which tends to produce more sharply differentiated critique.
Can I use this approach with my team, not just individually?
Absolutely. In fact, running a structured AI devil’s advocate session with your leadership team before major decisions is an excellent practice. It externalizes the challenge — the criticism comes from the AI, not from a team member — which reduces interpersonal friction while still producing the strategic benefit.
How often should I do this?
For any decision involving significant resources (time, money, or relationships), run a challenge session. For smaller, more frequent decisions, consider a quick “what’s the strongest objection to this?” check at the start of planning. Over time, the habit of challenger-thinking will become automatic.
The Close
The idea that fell flat last year would have survived if I’d asked harder questions before I committed.
I know that now. And I use AI differently because of it.
What changed wasn’t the technology — it was the intention I brought to the conversation. I stopped using AI as a mirror that reflects back what I already believe. I started using it as the honest advisor who cares enough to tell me where I’m wrong.
That kind of honesty is hard to find in real relationships. People who work for you don’t want to challenge your ideas. People who like you don’t want to hurt your feelings. People who compete with you aren’t going to help you get better.
AI will argue with you thoroughly, every time, without ego or agenda — if you ask it to.
Most entrepreneurs reading this have used AI hundreds of times and asked it to confirm their thinking. I’m asking you to try something different. Run your next big idea through a devil’s advocate session. Ask AI to tell you where you’re wrong.
Then take what survives and build it.
Jonathan Mast is an AI business strategist and the founder of White Beard Strategies, where he helps entrepreneurs use AI to make better decisions — not faster wrong ones. He works with business owners who want to build smarter, not just busier.