Stop Building Internal Chatbots
With AI products improving rapidly and employees using them in secret, companies that build instead of buy risk slowing their teams and increasing their exposure
A few months ago, I spoke with a pharma exec who had a great idea for an AI pilot. It was clever and feasible. We could prototype it in days as a custom GPT within our ChatGPT Enterprise instance.
He was energized. So he talked to IT. They told him to present the idea to their internal AI working group, which met every few weeks. If approved, the idea would go to their enterprise IT outsourcing partner’s backlog to be scoped, built, and maintained. Best case? A pilot in three to six months.
Demoralized, he decided not to bother.
This story—variations of which I see frequently—isn’t just about bureaucracy. It’s about how enterprise habits, and particularly the reflex to build everything in-house, are now actively inhibiting AI progress. I’ve seen smart ideas with potentially big impact stall not because they’re flawed, but because organizations insist on reinventing what already exists.
The problem is simple: companies build AI products when they should buy. They overestimate control, underestimate complexity, and lose time to the impossible task of keeping pace with dedicated AI product companies. And in the race to realize AI’s value, that delay is one of the costliest risks of all.
Outdated rationales
Why build when you can buy?
Some of the behavior is historical. In the early days of ChatGPT (after all, its success surprised even OpenAI), concerns about AI labs training on your data were legitimate—enterprise protections didn’t yet exist. But today, the big labs offer robust enterprise products with no training on data and strong privacy and security controls. (See for yourself at OpenAI’s trust portal and Anthropic’s.)
Some of it’s political. IT consulting firms often pitch internal chatbot builds because they get paid to deliver them. These pitches tend to stoke fear, uncertainty, and doubt about external tools. While the best consultants help companies adopt world-class tools when warranted, opportunistic consultants encourage companies to do it themselves—build and maintenance costs be damned.
And some of it’s structural. IT teams are understandably nervous about employees using tools they can’t govern or support. Centralizing access feels safer, and often stems from a well-intentioned desire to ensure security and compliance. But as we’ll see, this just leads to secret use of popular AI products, and the impossible task of IT teams trying to keep pace with accelerating AI progress.
Falling behind
A few weeks ago, I presented to a group of marketers and IT leads at a global life sciences company. They had their own internal chatbot, which until recently had kept pace with many features of public ones like ChatGPT.
At that meeting, I showed the group Deep Research, the agentic research tool available only in ChatGPT, not OpenAI’s API. I showed them GPT-4o image generation, not yet in the API at that time either. I showed them o3, available in the API but not yet incorporated into their internal chatbot. The marketers were excited about the new capabilities, then disappointed to learn they couldn’t access them internally, with IT offering no timeline for when they could.
So employees do what they’ve always done when blocked: they find a workaround. Increasingly, they just use external AI products anyway, but don’t tell anyone. Ethan Mollick identified these “secret cyborgs” as far back as March 2023, and they’re still here. In a recent survey, for example, one-third of workers using generative AI tools at work said they kept that use a secret. Of those, 36% liked the secret advantage—an advantage they get by using AI products more sophisticated than those offered internally.
Higher costs and risk
But wait, isn’t it cheaper to build chatbots internally and just pay for API calls based on usage? And isn’t that a good reason to build versus buy? I’ve heard this objection before: “We can’t license ChatGPT Enterprise for everyone. It’s too expensive!”
That argument falls apart under scrutiny.
First, building and maintaining internal tools isn’t free. You pay IT teams, or consultants, to scope, develop, and manage them. Those costs are often hidden, but significant.
Second, API costs are only cheaper if usage stays low or you consistently forgo the best models. For example, as of this writing, OpenAI’s o3 model costs $40 per million output tokens via the API, roughly the output of a single large project. That’s about the cost of an Enterprise license with volume discounts, and more than a Team license.
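The comparison above is easy to sanity-check yourself. Here is a back-of-the-envelope sketch in Python; the $40-per-million-token figure comes from the text, while the per-seat prices are hypothetical placeholders, not quoted rates (check the vendors' current pricing pages before relying on any of these numbers):

```python
# Assumption from the article: o3 API output pricing as of writing.
O3_OUTPUT_PRICE_PER_M_TOKENS = 40.00  # $ per 1M output tokens

# Hypothetical per-seat monthly prices for illustration only.
ENTERPRISE_SEAT_MONTHLY = 40.00  # $ per user/month (assumed volume-discounted rate)
TEAM_SEAT_MONTHLY = 30.00        # $ per user/month (assumed)

def monthly_api_cost(output_tokens_millions: float) -> float:
    """Monthly API spend for a user producing the given millions of output tokens."""
    return output_tokens_millions * O3_OUTPUT_PRICE_PER_M_TOKENS

# One heavy user producing ~1M output tokens a month already costs
# about as much as the assumed Enterprise seat, and more than Team.
heavy_user_cost = monthly_api_cost(1.0)
print(heavy_user_cost)                          # dollars per month via the API
print(heavy_user_cost >= ENTERPRISE_SEAT_MONTHLY)
print(heavy_user_cost > TEAM_SEAT_MONTHLY)
```

The point of the sketch is not the exact numbers, which shift often, but the shape of the curve: per-token pricing only wins while usage is light, and the break-even arrives quickly for engaged users.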
Third, there’s real opportunity cost when competitors get access to better models and features before your IT team can catch up (if that’s even possible; Deep Research is still not in OpenAI’s API). For example, the jump in scientific reasoning (GPQA) from OpenAI’s GPT-4o to o1 was from 49% to 62%, meaning ChatGPT users got access to a far smarter model overnight.
Finally, building internal chatbots may actually worsen the very risks companies are trying to mitigate. It drives employees to secretly use external tools with better features and functionality, but less built-in data protection. In ChatGPT’s personal versions, you have to manually turn off training on your data; in Enterprise, no training is the default.
What works instead
In my experience, the winning strategy looks like this: give people access to best-in-class tools. Encourage experimentation. Facilitate sharing. And scale what’s successful. I’ve seen this succeed repeatedly, including at Klick, where we rolled out ChatGPT Enterprise companywide, launched a $1 million prize for AI ideas to encourage experimentation, and scaled great ideas like a compliance solution into standalone products.
So, rather than doubling down on increasingly untenable DIY projects, consider this instead:
Give your teams access to the best tools. You won’t beat OpenAI, Anthropic, or Google on product velocity. Don’t try. License their tools and build from there.
Encourage open experimentation. Let people tinker. Remove fear. Normalize sharing and iterating on new use cases.
Invest in scaling what works. When something gains traction, needs scaling, but can’t be scaled with licensed tools, then—and only then—invest in a proprietary solution.
I hope that next time someone in your organization has a great idea and a way to test it with off-the-shelf tools, the answer is “yes!”
Note: The opinions above are my own and not necessarily those of my employer, Klick Health.