Five Keys to Successful AI Transformation
What I've seen work—and not—across hundreds of companies, case studies, and reports
I’m in the midst of a two-week summer vacation, yet I find myself having the same conversations I have at work. Everyone (disclosure: especially me) wants to talk about AI. When the people I’m talking with include organizational leaders, a recurring question is: how do we transform our organization for the AI era?
I’ve spent 25 years in tech, the past 8 in AI, and the past 1.5 translating AI market and technology research into transformation initiatives and experiments as EVP, Generative AI at Klick Health (this article reflects my opinions and not necessarily Klick’s). Through my work with life science companies and exposure to other industries via networking groups, I’ve seen hundreds of examples of what works and what doesn’t. I’ve also closely read related research reports and case studies.
What follows are five key actions that I’ve seen consistently drive success. They aren’t complicated, but companies get them wrong all the time.
Establish executive leadership with urgency
AI transformation doesn’t succeed when you push it down to a mid-level working group that meets every six weeks. The pace of change is too fast, and decisions too impactful. What works is when the CEO and senior executives own the agenda and meet with weekly urgency (or even more frequently when warranted).
At Klick, we have an AI steering committee that includes our CEO, COO, and other senior leaders. We meet weekly to discuss AI progress, challenges, and opportunities. Each week, to align the organization around AI top-to-bottom, we also meet with different department heads to discuss AI adoption and application in their groups. We make decisions on the spot. Because AI evolves daily, not quarterly, this speed matters.
The alternative approach, delegating to lower-level groups that have little authority and meet infrequently, leads to decisions that lag the pace of technological progress and to backlog purgatory. In one case, a client had a great idea for an AI initiative but couldn’t present it to a mid-level working group for weeks, and was told that at best the idea would go into their AI project backlog. He simply decided not to pursue it, since model progress during that delay could have made it obsolete anyway.
Democratize access to best-in-class tools
The next critical decision is: what AI tools will you let employees use?
Answer: give them access to best-in-class tools they already know and love. When companies roll out ChatGPT Enterprise, for example, adoption is rapid, because most employees already use ChatGPT in their personal lives and many have been secretly using it for work. Enterprise versions of widely used tools like ChatGPT bring needed privacy and security while keeping pace with the cutting-edge models and features that get released weekly.
The wrong approach? Building internal chatbots. They significantly lag best-in-class tools in models and features. I see this repeatedly. For example, I once demonstrated OpenAI’s Deep Research to a company that used gpt-4o via an internal chatbot. They wanted to use Deep Research, but couldn’t. If you don’t give people access to the best tools, they often just use them in secret and so don’t share what they learn (see below for why sharing is so important).
It’s also important to facilitate easy experimentation with new and emerging AI models and products. Releases are constant, and discovering something new and powerful can yield a huge advantage if you leverage it before a competitor does. You can still ensure privacy and security, for example by designating AI explorers who are explicitly allowed to try new tools without using any proprietary data.
Incentivize experimentation from the bottom up
Executives don’t know the nuances of every job function. The best use cases emerge from the people doing the work. That means you need to encourage and reward bottom-up experimentation.
At Klick, for example, we launched a $1 million AI prize. Employees submitted hundreds of ideas, and client judges chose the winners. This created energy, surfaced novel ideas, and produced tangible prototypes. One winning idea—Guardrail, a compliance automation tool—was so promising that we invested in building it into a full product.
There are other ways to provide incentives. Some are carrots, like our contest. Some are sticks, like making AI use a factor in performance reviews. Honestly, though, access to great AI tools and a culture that encourages their use are usually the biggest incentive of all. People want to do better, faster, higher-impact work, and they’re excited when they’re equipped and encouraged to do so and see colleagues doing the same.
Facilitate sharing knowledge and applications
AI moves too quickly for static training materials. What works better is peer-to-peer sharing.
At Klick, Slack channels play a big role here. We have a main generative AI channel and more specific ones, such as channels for beginners and for image generation. These have become vibrant hubs where people share use cases, examples, tools, and lessons.
Custom GPTs extend this sharing beyond words to packaged mini-apps. While they haven’t caught on in the consumer world, they’re exceptionally useful in enterprise settings. At Klick, we now have more custom GPTs (over 1,600) than employees. People create and share them for recurring tasks, such as converting project plans into written descriptions, and for brand work, such as loading a custom GPT with the prescribing information for a client’s drugs.
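To make “packaging a recurring task” concrete, here is a minimal sketch of the same idea expressed through the OpenAI Python SDK rather than ChatGPT’s no-code GPT builder. The model name, system prompt, and helper function are illustrative assumptions for this article, not anything Klick has built; a real custom GPT would also carry shared files and conversation starters.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Standing instructions play the role a custom GPT's configuration plays in ChatGPT:
# written once, then reused every time someone submits a project plan.
SYSTEM_PROMPT = (
    "You turn project plans into short, client-facing written descriptions. "
    "Keep the summary under 200 words and list key milestones as bullets."
)

def describe_project_plan(plan_text: str) -> str:
    """Convert a raw project plan into a written description (illustrative helper)."""
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative choice; any current model would do
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": plan_text},
        ],
    )
    return response.choices[0].message.content
```

The point is the packaging: once the instructions are captured, anyone on the team gets the same result without reinventing the prompt.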
The key takeaway here is that with the fast pace of AI progress, communities beat classrooms. I haven’t seen any workshops or online training materials that can keep pace with AI developments, but when knowledge spreads virally, people stay current, and adoption accelerates.
Scale proven use cases into enterprise solutions
Finally, to go beyond individual use cases, you need to identify the best grassroots ideas to scale into enterprise solutions.
As mentioned earlier, Guardrail at Klick is one example. What began as an employee idea became a prize-winning prototype, then a funded and commercialized product.
This is one area where proactive AI leadership plays a critical role. When leaders seek use cases that show signs of success and have room to scale, they can immediately direct investment to take them to the next level.
Done right, this drives continuous innovation: leadership vision → broad experimentation → shared learning → scaled products. Ethan Mollick refers to this as leadership, crowd, and lab. Companies that master this cycle transform faster than their competitors.
The difference between success and failure
I’ve now seen this pattern quite consistently: first-hand at Klick, among life science clients that are transforming successfully, across other industries via case studies and confidential presentations, and in research and reports from people like Mollick. Successful companies lean in with executive urgency, democratize access to the best tools, motivate experimentation, foster sharing, and scale big, impactful ideas.
Failures look very different: leadership that delegates AI to people without the authority to make quick decisions, working groups that meet too infrequently, internal tools that can’t compete with the ones employees use in their personal lives, a culture that discourages experimentation, and scalable innovations that stay hidden because people use AI in secret.
It’s not complicated, but it requires a different approach from prior transformation initiatives, probably because the technology is so powerful, widespread, and rapidly improving. Those who get it right can transform their organizations at the same pace. Those who don’t risk being left behind just as rapidly.