Something strange is happening with AI. People are furious about it and addicted to it at the same time.
ChatGPT has 800 million weekly active users. It's the fifth most-visited website on the planet. And yet only 8% of Americans say they'd pay extra for AI features in the software they already use. Articles about "drowning in AI features I never asked for" go viral every week. Comment sections fill with engineers and product people venting about AI being shoved into every product they touch.
Those two facts seem contradictory. They're not. They're the most important signal in B2B SaaS right now, and almost nobody is reading it correctly.
People use AI when they choose it. They resent it when you choose it for them.
That distinction sounds obvious. But I don't think most SaaS companies have internalized it. I've spent the past year talking to B2B product teams, and nearly every one of them shipped an AI feature in that time. Smart search. Copilot-style assistants. "Chat with your data." AI-generated summaries. Sometimes all four.
Almost none of them will tell you the adoption numbers.
A VP of Product at a mid-market project management tool told me their AI assistant had a 4% weekly active usage rate six months after launch. Three quarters' worth of engineering time went into building it. The sales team uses it as a checkbox on RFPs. Customers don't touch it.
When I asked why they built it, the answer was honest: "Our competitor announced one. Our board asked when we'd have ours. We had six months."
That's a panic decision dressed up as a product decision. And it's happening everywhere.
What most SaaS companies are actually shipping
There's a term I keep coming back to for what most SaaS companies are shipping: checkbox AI. It exists to satisfy an investor narrative, close an RFP, match a competitor's press release. It shows up in your product whether customers need it or not, gets bundled into pricing tiers to justify increases, and hits 4% adoption before getting quietly sunsetted eighteen months later.
Microsoft raised Microsoft 365 prices 22%, from $18 to $22 per user per month, to force Copilot uptake. Customers only found out there was a "classic" plan without Copilot when they tried to cancel. Google Workspace did the same thing. One person I talked to put it bluntly: "I don't want a companion. Especially not a fake AI buddy. I never asked for this."
A lot of people read the backlash as "AI doesn't have product-market fit." That's wrong. 800 million weekly users proves people want AI. What they don't want is AI chosen for them by someone optimizing for a board deck.
The adoption numbers nobody publishes
I started asking every SaaS company I talked to the same question: what's the weekly active usage (WAU) on your AI feature?
Most wouldn't answer. The ones that did painted the same picture. A customer success platform shipped AI-generated call summaries. 6% adoption after four months. A logistics SaaS added AI route optimization suggestions. 11% click-through, 2% action rate. An HR platform launched an AI policy Q&A bot. Usage spiked for two weeks, then flatlined at 3%.
These are real teams that spent real quarters building features that real customers don't use.
But there's one AI feature I keep hearing people actually praise, unprompted. Confluence's AI search. A head of IT at a 200-person company told me: "I used to dig through 40 pages trying to find how we handle PTO requests for contractors. Now I type the question and it gives me the answer from our own docs."
Why does Confluence work? It has context. It knows your company's actual documentation. It answers questions about how your specific organization operates, not generic questions about the world.
That's the pattern. Vendor-chosen AI annoys. Workflow-specific AI sticks. And almost nobody in the current AI backlash discourse has said it that clearly.
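Mechanically, the workflow-specific pattern behind features like Confluence's AI search is retrieval over your own documents: find the passage of internal documentation most relevant to the question, then let a model answer from that passage instead of from generic world knowledge. Here is a minimal sketch of the retrieval half, using a toy bag-of-words similarity. The document set and helper names are illustrative, not Confluence's actual implementation:

```python
import math
from collections import Counter

def tokenize(text):
    # crude tokenizer: lowercase words, trailing punctuation stripped
    return [w.lower().strip(".,?;:") for w in text.split()]

def cosine(a, b):
    # cosine similarity between two term-count vectors
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(question, docs):
    # return the doc most similar to the question; in a real system this
    # passage would then be handed to an LLM as grounding context
    q = Counter(tokenize(question))
    scored = [(cosine(q, Counter(tokenize(d))), d) for d in docs]
    return max(scored)[1]

docs = [
    "Contractors accrue no PTO; PTO requests for contractors are denied.",
    "Full-time employees accrue 15 PTO days per year.",
    "Expense reports are due by the 5th of each month.",
]
print(retrieve("How do we handle PTO requests for contractors?", docs))
# prints the contractor PTO policy line
```

A production system would use embeddings rather than word counts, and an LLM to phrase the final answer, but the structural point stands: the value comes from the company-specific corpus, not from the model itself.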
This is an information problem, not a product management problem
I think the reason runs deeper than bad product management. It's an information problem.
Hayek wrote about this in 1945, explaining why central economic planning fails. The knowledge required to make good decisions "never exists in concentrated or integrated form but solely as the dispersed bits of incomplete and frequently contradictory knowledge which all the separate individuals possess."
Product teams are central planners. They try to predict what 1,300 customers need. They can't. The knowledge is too distributed and too embedded in each customer's actual work. A roofing company's workflow looks nothing like a hospital's compliance process. One product team in San Francisco cannot know all of this.
So they do what central planners always do: build for the average. The 80% use case. The common denominator. The remaining 20%, the part that reflects how each team actually operates, falls into a gap nobody can fill. Feature requests go into a backlog. "Maybe Q3. Maybe never."
When that product team ships an AI feature, they're central-planning the AI too. They decide everyone needs a chatbot, or a summarizer, or a copilot. But the maintenance director at a concrete plant doesn't need a chatbot. He needs a drum rotation tracker. And the defense contractor doesn't need AI summaries. She needs a ServiceNow connector so her technicians can stay in one interface instead of switching between two systems corporate mandated.
The product team can't know this. The customers do. Which means the only AI features that reliably hit high adoption are the ones customers build themselves.
What happens when you let customers build their own AI features
It turns out there's data for this.
A maintenance management platform tried the opposite approach. Instead of building AI features for their customers, they gave customers a way to build their own. No prescribed features. No AI assistant with a name and a personality. Just: describe what you need, and AI builds it.
1,168 users across 751 companies created 631 applications. They launched them 7,600 times. 90.8% activated without anyone telling them to. The top apps hit 89-92% adoption across their user base.
Same AI underneath. The difference was who decided what to build.
A concrete manufacturer. Three users, four apps. Drum rotation tracking, plant uptime calculations, cumulative meter readings, open work order views. Every one built around how concrete manufacturing actually works, not how a generic maintenance platform thinks it should work. 89% adoption.
I talked to one of those users, a guy named Cameron who works at a concrete plant. He'd been trying to build his own purchase order system from scratch, complete with APIs, web servers, and authentication tokens, because the software he was paying for couldn't surface the view he needed. When he got tools to build it himself, he stayed six hours overtime. He got addicted to seeing it come together.
The gap between checkbox AI and what Cameron built is a knowledge gap. The product team couldn't have known Cameron needed drum rotation tracking. Cameron did.
The checkbox AI death spiral
The standard objection is that these AI integrations are just "poorly thought through," and with better product management companies could ship AI features people actually want. That's partly right. But I think it's more specific than that. The integrations are poorly thought through because they can't be well thought through. No product team can think through what 1,300 different customers need. The knowledge doesn't exist at the center. It exists at the edges, at Cameron's concrete plant, at the hospital compliance office, at the fleet operator's dispatch desk.
A product owner at a large company described the internal dynamic to me: "All these companies are in panic mode. Many of these folks are not thinking clearly and have no idea what they're doing. It's all theatre for their investors coupled with a fear of being seen as falling behind. You're not even allowed to say it's a bad idea."
That's the checkbox AI death spiral. Board pressure leads to a panic feature. Low adoption leads to more board pressure. More board pressure leads to more panic features. The only way out is to stop being the bottleneck.
Customization is going to zero
There's a pattern in economics where every decade or so, something that used to be expensive becomes basically free, and markets reorganize around the new cost structure.
In the 2000s, the cost of storage went to zero. That created YouTube and Instagram, because suddenly anyone could upload unlimited photos and video. In the 2010s, the cost of distribution went to zero. That created Netflix and Spotify, because suddenly anyone could reach millions of people.
Right now, customization is going to zero.
It used to take months and $100k to customize enterprise software. You needed engineers who understood APIs, data models, auth systems. Cameron was literally building his own system because hiring someone to customize it was too expensive.
Now you describe what you need in English and AI builds it in minutes for fifty cents.
That changes the entire argument about AI features. The debate right now is "should companies force AI features on users?" The real question is whether companies should be building AI features for users at all, or giving users the tools to build their own.
Every SaaS product covers roughly 80% of what any given customer needs. That's what justifies the subscription. The remaining 20% is where churn lives. No SaaS company can build for it because their roadmap has to serve the majority. Custom development costs too much. Consultants leave behind brittle code nobody maintains.
But when customization costs collapse to zero, that 20% gap doesn't have to stay empty. The customers who know exactly what they need can fill it themselves. When they do, adoption numbers look like 90%, not 4%.
The renewal conversation you're about to have
The renewal conversation is coming. A CFO looks at a $50,000 annual SaaS contract. They see the AI features on the invoice, the ones your team spent three quarters building, the ones that justified the 15% price increase. Then they pull up the adoption dashboard.
4% WAU.
They don't blame the feature. They blame the tool. That contract is on the chopping block, not because your product is bad, but because the most visible, most expensive part of it sits unused.
You can't win that conversation with a better AI roadmap. You win it with utilization numbers. One account we tracked went from under 35% platform utilization to over 70% after their CS team started helping customers build their own apps. Same product. Same price. The tool just finally fit how people actually work.
The companies that survive the next five years won't be the ones with the most impressive AI demos. They'll be the ones where the renewal conversation never comes up, because their customers are too busy using tools they built themselves.
See Gigacatalyst in Action
We build the AI platform layer that sits on top of B2B SaaS products and lets customers create their own workflow apps. Connected to real data. Governed by your security model.
