Subtitle: There is an ocean of AI content from people who have never made payroll — and the market is starting to sort the practitioners from the commentators. Here is how to tell the difference.
I want to tell you about a conversation I had with a business owner last spring.
She had been following AI content creators for over a year. She had watched hours of YouTube videos, read dozens of LinkedIn posts, taken two courses, and tried to implement several “proven” AI strategies in her business.
Her results? Minimal. She felt behind. She felt like she was doing something wrong.
When I sat down with her and we went through what she had tried, the problem became obvious fast. The strategies she had learned were designed for a very specific type of business — content-first, audience-heavy, with relatively simple operations. Her business was different. She had a team. She had a complex client delivery process. She had financial cycles and operational constraints that the AI advice she had followed had never accounted for.
The advice wasn’t bad because the creators were dishonest. The advice was misapplied because the creators had never actually run her kind of business. They had learned AI in the context of solo content creation, not multi-person service operations, and they had taught what they knew — which wasn’t what she needed.
This is the credibility problem in the AI advice market. And it is bigger than most people want to admit.
Key Takeaways
- Over 71% of organizations regularly use generative AI, yet more than 80% report no measurable impact on EBIT — suggesting that most AI advice being followed isn’t translating into real business results.
- The AI content market has exploded with voices who are fluent in AI tools but lack the operational business experience to translate that fluency into advice that works in a real company with real stakes.
- Michael Hyatt’s core insight is accurate and important: most AI content is written by people who have never signed the front of a paycheck. The market is beginning to self-sort.
- The right filter for AI business advice isn’t “does this person know AI?” It’s “does this person know my kind of business?”
- Genuine AI credibility comes from documented results in real businesses with real operational complexity — not from follower counts or tool fluency.
The Problem: Expertise Inflation in the AI Space
Here’s how the AI content landscape developed over the last several years.
When generative AI hit the mainstream in late 2022 and through 2023, there was a land rush for attention. Anyone who could articulate what the tools did gained an audience quickly, because most people were genuinely confused and hungry for explanation. Early adopters with no business background became influencers because they were early, not because they were experts.
That’s not a criticism — it was a predictable dynamic. Being first and being best are two different things, but in the early days, being first looked a lot like being best.
The problem is that many of those early voices built large followings before the limitations of their context became apparent. They know AI deeply. They don’t know business deeply. And when those two things are combined in advice-giving, what you get is technically accurate but operationally naive guidance.
You can see it in the advice itself. It tends toward: “Use this prompt and get this output.” It rarely addresses: “How do you integrate this into a team workflow without creating chaos?” or “How do you measure whether this actually moved the needle on revenue?” or “What do you do when this fails in front of a client?”
Those are not prompt engineering questions. Those are business questions. And they require business experience to answer well.
Michael Hyatt, a bestselling author and former publisher who has spent decades in business leadership, has been pointed about this: there is a profound difference between knowing how to use AI and knowing how to deploy it in a business with stakes. His platform, AI Business Lab, is specifically designed for people who have made payroll, managed teams through crisis, and made bet-the-company decisions — because he believes that context is what makes AI advice actually useful.
He’s right.
The Evidence: Why Results Have Lagged
The data makes the credibility problem starkly visible.
Over 71% of organizations regularly use generative AI, according to current research. Yet more than 80% of those organizations report no measurable impact on EBIT — earnings before interest and taxes, a direct measure of business profit.
Think about that for a moment. The vast majority of businesses using AI regularly are not seeing it move the bottom line. They are using it more. They are not profiting more.
For every $1 invested in AI, companies see an average return of $3.70 — but that return concentrates heavily in organizations deploying AI across multiple integrated business functions. The ones running isolated experiments or following advice designed for different contexts see much less.
The skill gap is real. Nearly 43% of organizations report that insufficient leadership vision is holding back their AI initiatives. The OECD’s 2026 research shows a massive disparity: large enterprises use AI at a 52% adoption rate while small enterprises sit at just 17.4% — a gap of more than 34 percentage points that reflects not access, but capacity to translate AI potential into operational reality.
The advice most entrepreneurs are following was designed for a different type of user, in a different type of business, facing different constraints. And when it doesn’t work, they assume they’re doing something wrong — when the real problem is that they were given advice that was never designed to work for them.
The Solution: A Better Filter for AI Content
I’m not suggesting you abandon all AI content. I’m suggesting you filter more intentionally.
The question to ask of any AI educator, coach, or content creator is not “do they know AI?” Almost everyone claiming to teach AI knows the tools. The question is: “Do they know the kind of business I’m trying to run?”
Here’s a credibility framework I use:
Filter 1: Do they have a business they’re actively running? Not a course business, not a content business — a business with clients, a team, operational complexity, and real financial stakes. Can they point to decisions that could have cost them significant money if they were wrong?
Filter 2: Can they show a specific result in a specific business? Not “I’ve seen clients do well with AI” — a documented, specific case: what they used, what it replaced, what it produced, and what changed on the bottom line or in the team’s capacity.
Filter 3: Do they acknowledge where AI doesn’t work? If every AI strategy they share works perfectly in their telling, be skeptical. Real implementation always involves failure and adjustment. Practitioners share the failures. Commentators only share the wins.
Filter 4: Does their business context match yours? A solo creator’s AI advice and a seven-person service company’s AI advice are not the same. The scale, the team dynamics, the client relationship, the financial risk profile — these all change what works. Make sure the person teaching you has operated in your territory.
Practical Steps
Step 1: Audit your current AI content diet. Make a list of the AI educators and influencers you follow most closely. For each one, research their actual business background. Do they have operational experience at the scale and complexity of your business? If not, that’s useful information.
Step 2: Look for documented case studies, not hypotheticals. When evaluating advice, ask: “Where is the proof?” Not testimonials — documented results with specific details. Numbers. Timelines. Before and after comparisons.
Step 3: Start producing your own documented results. The most credible AI voice in your network will eventually be the one who consistently shows their work. Don’t just use AI — document what you tried, what happened, and what you learned. Share that honestly. That practice builds real credibility over time.
Step 4: Develop your own “has this person made payroll” heuristic. Before following AI advice in your business, ask: would this recommendation make sense to someone who has had to make hard operational decisions? If the answer is “only if everything goes according to plan,” it’s advice built in a vacuum.
Step 5: Seek out peer communities where practitioners compare notes. The most useful AI advice often comes not from content creators but from peer operators in similar business situations who are solving the same problems in real time. Find or build those communities.
Step 6: Test before you trust. Any AI strategy, no matter how credible the source, should be tested at small scale in your specific business before it gets allocated real resources. Your business context is different. Validate that the advice transfers before you bet on it.
Step 7: Be honest about what is and isn’t working in your own AI stack. The market is self-sorting. The practitioners who are transparent about failure are the ones building lasting trust. If you’re sharing AI advice in your own business — with clients, with your team, with your audience — the willingness to say “this didn’t work the way I thought” is what separates you from the noise.
Frequently Asked Questions
How do I evaluate an AI educator’s credibility quickly?
Look for two things: specificity and skin in the game. Specific results from specific businesses, and evidence that they have something to lose if their advice is wrong. Ask: have they made payroll? Have they made a decision that could have cost them significantly? If their entire career has been content creation, their AI advice may be good — but it was developed in a very specific context.
Is it possible for someone without traditional business experience to give good AI advice?
Yes, with important caveats. If someone’s expertise is specifically in AI systems design, prompt engineering, or technical architecture, their advice in those domains can be excellent. The gap appears when purely technical AI knowledge is packaged as business strategy advice. Know what type of expertise you’re drawing on.
I’ve already followed advice that didn’t work. How do I move forward?
Start by diagnosing the gap honestly: was the advice wrong, or was it designed for a different business context? Then look for practitioners whose operational context matches yours more closely, and test advice at smaller scale before committing full resources.
What makes AI advice trustworthy?
Documented results over time, honesty about failure cases, and a business background that matches the complexity of the advice being given. Trust is built by showing your work — the successes and the failures, the strategies that scaled and the ones that didn’t.
Should I still follow AI educators who don’t have traditional business experience?
Yes, selectively. Many excellent AI practitioners don’t come from traditional business backgrounds but have deep technical and implementation knowledge that’s genuinely useful. The filter isn’t “do they have a business background?” — it’s “does their experience match the domain they’re advising in?”
The Close
I’ve been in business long enough to watch several technology waves sweep through the entrepreneurial world. Each one followed the same pattern: early attention went to the people who explained the technology well. Lasting value came from the people who figured out how to use it in a real business and were honest about what they found.
AI is no different. The wave of attention has already peaked for the explainers. What’s emerging now — and what I’m watching happen in real time — is the market sorting for practitioners.
The entrepreneurs who will define the AI conversation in the next three to five years are not the ones who knew the most tools the earliest. They are the ones who went to work, documented what happened, were honest about the failures, and built genuine results in businesses with actual stakes.
You do not have to be the loudest AI voice. You have to be the most honest one in your space.
Build that. Document that. Share that.
The market is looking for practitioners. Make sure you’re one of them.
Jonathan Mast is an AI business strategist and entrepreneur with decades of experience building businesses with real teams, real clients, and real stakes. He founded White Beard Strategies to help entrepreneurs build AI systems that work in the real world — not just in theory.