When AI becomes just a marketing gimmick
Last month, I reviewed a pitch deck from a “cutting-edge” logistics platform that featured a 3D-rendered AI avatar shaking hands with a truck driver. The entire presentation relied on auto-generated slides that vaguely referenced “predictive analytics” without ever mentioning data sources, error margins, or how the system actually made decisions. Here’s the kicker: the “AI” was just a repackaged version of their 2018 route optimization tool, with a neural network veneer slapped on by a third-party vendor. This isn’t an isolated case. Research shows that 60% of businesses now claim to use AI, yet independent audits (per a 2025 Gartner report) reveal that only 12% demonstrate more than superficial integration. The problem isn’t that AI can’t deliver; it’s that companies treat it as a marketing gimmick to distract from underwhelming products. I’ve seen AI washing companies thrive precisely because they exploit the public’s trust in “innovation” without ever delivering substance.
How to spot AI washing companies
The best way to separate hype from substance is to demand specificity. Here’s what to watch for:
– “AI will revolutionize your business” without explaining how. If a vendor can’t break down which algorithms they’re using, what data they train on, or how decisions are validated, walk away. At a client’s request, I once evaluated a “self-optimizing pricing AI” for a retail chain. The vendor’s entire response was a 10-slide PowerPoint with no whitepapers, no sample outputs, and not a single error rate disclosed. Their definition of “optimizing” turned out to be simply “suggesting price changes based on competitor listings”; no machine learning involved.
– Jargon as a substitute for substance. Terms like “deep learning” or “transformer models” mean nothing unless paired with concrete examples. I’ve seen startups use “neural networks” to describe a simple lookup table. Ask for case studies, not abstract diagrams.
– No transparency about trade-offs. Every AI system has limitations. A healthcare AI I investigated claimed 98% accuracy for diagnosing fractures, but when I asked about false positives in elderly patients, the vendor pivoted to “our AI is constantly learning.” The reality? The “learning” was just a monthly data dump from radiologists, with no algorithmic improvements.
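The gap between headline accuracy and subgroup performance is easy to demonstrate. Here’s a minimal sketch with invented numbers (nothing below comes from the vendor in the anecdote) showing how a model can report impressive overall accuracy while quietly misfiring on one patient group:

```python
# Illustrative sketch: overall accuracy can hide a subgroup failure mode.
# All counts here are hypothetical, chosen only to make the point.

def rates(results):
    """results: list of (actual_fracture, predicted_fracture) pairs."""
    tp = sum(1 for a, p in results if a and p)
    tn = sum(1 for a, p in results if not a and not p)
    fp = sum(1 for a, p in results if not a and p)
    accuracy = (tp + tn) / len(results)
    # False positive rate: healthy scans wrongly flagged as fractures.
    fpr = fp / (fp + tn) if (fp + tn) else 0.0
    return accuracy, fpr

# 900 younger-patient scans: the model does well here.
younger = [(True, True)] * 100 + [(False, False)] * 795 + [(False, True)] * 5
# 100 elderly-patient scans: many healthy scans get flagged as fractures.
elderly = [(True, True)] * 20 + [(False, False)] * 60 + [(False, True)] * 20

overall_acc, _ = rates(younger + elderly)
_, elderly_fpr = rates(elderly)
print(f"overall accuracy: {overall_acc:.1%}")             # 97.5%, pitch-deck ready
print(f"elderly false positive rate: {elderly_fpr:.1%}")  # 25.0%, the real question
```

This is exactly why “what are your false positives in elderly patients?” is a sharper question than “what’s your accuracy?”: the single headline number averages the failure away.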
Why AI washing works, and how to resist it
Companies engage in AI washing because the incentives are stacked in their favor. Investors reward hype over execution, customers demand “AI” like it’s a product feature (not a capability), and regulators lag behind. A client I worked with spent $800,000 on a “predictive maintenance” system for industrial equipment. The vendor’s sales team framed it as a “next-gen AI platform,” but the actual product was a rule-based alert system with a chatbot interface. The “AI” was just a premium subscription fee for a feature that could’ve been coded in Excel. The vendor’s CTO later admitted to me: *”We knew 90% of buyers don’t understand the difference, so we let them imagine it.”*
Yet there’s a counter-move: reverse-engineer their claims. Demand three things upfront:
1. A sample output, not a demo. Show me the raw data, the model’s reasoning, and how a human would verify it.
2. A failure case. Every AI system has edge cases. Ask: *What happens when the model is wrong?*
3. A cost-benefit analysis, not just ROI. How does this compare to manual methods? Where does the AI save time *and* improve quality?
Building an AI-savvy culture
The antidote to AI washing isn’t skepticism; it’s informed curiosity. Start by treating every “AI solution” like a black box and demanding to open it. I’ve seen the most robust implementations come from teams that ask:
– *”Who’s in the loop when the AI makes a mistake?”* (If no one is, it’s not AI; it’s an automated spreadsheet.)
– *”Can I see the model’s confidence scores?”* (A vendor who hesitates is hiding something.)
– *”What’s the worst-case scenario if this fails?”* (If they can’t answer, they’ve already failed you.)
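These questions translate into a concrete interface requirement. The sketch below is hypothetical (the item names, labels, and the 0.80 threshold are mine, not any vendor’s), but it shows what “a human in the loop” and “confidence scores” look like when a system actually has them:

```python
# Sketch of the interface the questions above imply: every prediction carries
# a confidence score, and low-confidence cases go to a named human-review
# queue instead of being auto-accepted. All names and thresholds are invented.

REVIEW_THRESHOLD = 0.80  # below this, a person decides, not the model

def triage(predictions):
    """predictions: list of (item_id, label, confidence) from the vendor model."""
    auto, human_review = [], []
    for item_id, label, confidence in predictions:
        if confidence >= REVIEW_THRESHOLD:
            auto.append((item_id, label))
        else:
            # The answer to "who's in the loop": an explicit queue, not silence.
            human_review.append((item_id, label, confidence))
    return auto, human_review

preds = [("inv-001", "approve", 0.97),
         ("inv-002", "reject", 0.62),   # too uncertain to act on alone
         ("inv-003", "approve", 0.88)]
auto, review = triage(preds)
print(len(auto), "auto-handled,", len(review), "sent to a human")
# prints: 2 auto-handled, 1 sent to a human
```

A vendor whose system can’t expose per-prediction confidence, or has no routing path for the uncertain cases, can’t answer the three questions above in code, whatever the sales deck says.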
Moreover, the best protections come from diverse perspectives. At a recent workshop, I led a team of engineers and domain experts to evaluate a “generative AI” tool for contract drafting. The engineers focused on training data; the lawyers spotlighted bias risks; the compliance team asked about audit trails. Together, they exposed that the tool’s “AI-generated clauses” were often regurgitated from public filings, with no legal review. AI washing thrives in silence, but scrutiny breaks it down.
The next wave of accountability
The trend toward third-party AI audits is promising, but it won’t happen fast enough. In my experience, the most reliable signal of a vendor’s integrity is their reaction to hard questions. Do they provide whitepapers? Do they admit limitations? Do they offer refunds for false claims? If not, you’re dealing with an AI washing company.
For now, the onus is on buyers to demand transparency. The most interesting part of any AI system isn’t the hype; it’s the unvarnished reality of how it works. And if a vendor can’t handle that scrutiny? They’re not selling AI. They’re selling smoke.

