AI Transformation
Why AI transformation fails when technology is installed but the operating model doesn't change, and the exec decisions that make it work.
The Pattern Every Executive Recognises
Here's a scenario you may already know firsthand. Company buys an AI tool. Announcement goes out. Some teams experiment. Six months later, the tool works perfectly — and 90% of the company has gone back to their usual way of working.
It's the same story as every enterprise software rollout that didn't stick. The technology isn't the problem. The operating model is. Installing technology without changing how you work is just expense. Think of it like a treadmill nobody runs on — your legs work fine, the machine works fine, the missing piece is the decision to change the routine. No amount of upgrading the treadmill fixes that.
This is the single biggest reason AI transformations fail. Not bad technology. Not bad people. Bad operating models — the habits, rules, and decisions that govern how work actually gets done.
The Shopify Story: From Chaos to AI-Native in 18 Months
In 2023, Shopify's CEO Tobi Lutke looked across his company and saw something familiar to every executive reading this: AI experiments everywhere, results nowhere.
Teams were scattered across the building doing their own thing with ChatGPT. No shared tools. No shared measurements. No way to tell what was working. It was like an orchestra where every musician picked their own song — lots of noise, no music.
Lutke did something radical. He didn't buy better AI. He didn't hire a fleet of data scientists. He made three decisions that changed how the company operates:
Decision 1: "Justify AI before you justify a new hire." Every team had to explain why AI couldn't do the job before requesting headcount. Overnight, AI went from "nice-to-have side project" to "the first option we consider."
Decision 2: "No AI ships without automated quality tests." Before any AI feature reached a customer, it had to pass an eval gate — think of it like a driving test before you get the keys. This killed the "move fast and break things" mentality that sinks AI projects.
Decision 3: "Every product spec includes AI metrics from day one." Not "we'll measure later." Day one.
(These specific policy details are illustrative of Shopify's reported AI-first approach. Only the headcount mandate has been publicly confirmed, via Lutke's internal memo; verify the rest against Shopify's published materials.)
The results (illustrative figures — verify against Shopify's published materials): over 80% of Shopify engineers adopted AI coding assistants, AI features appeared on every major product surface, and support ticket deflection improved materially. All within roughly 18 months of those three decisions.
Here's the punchline: every competitor had access to the same AI models. Shopify's edge came from the mandate, not the model.
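Decision 2's eval gate is concrete enough to sketch. The idea: score the AI's outputs against a labelled set, and block the release when the score drops below a quality bar. Everything here — the field names, the sample cases, and the 90% threshold — is a hypothetical illustration of the pattern, not Shopify's actual pipeline.

```python
PASS_THRESHOLD = 0.90  # hypothetical quality bar; tune per feature

def eval_score(cases):
    """Fraction of labelled cases where the model output matches the expected label."""
    passed = sum(1 for c in cases if c["model_output"] == c["expected"])
    return passed / len(cases)

def ci_gate(cases, threshold=PASS_THRESHOLD):
    """Return the exit code a CI job would use: 0 = ship, 1 = block the release."""
    return 0 if eval_score(cases) >= threshold else 1

# A toy labelled set: three correct classifications, one miss -> 75%, gate fails.
sample = [
    {"model_output": "refund", "expected": "refund"},
    {"model_output": "refund", "expected": "refund"},
    {"model_output": "escalate", "expected": "escalate"},
    {"model_output": "refund", "expected": "escalate"},
]
print(eval_score(sample))   # 0.75
print(ci_gate(sample))      # 1 -> CI fails, the feature doesn't ship
```

Wired into CI, a non-zero exit from this script fails the build — that is the whole "driving test before you get the keys" mechanism.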
There Are No Dumb Questions
Q: Does this mean I need to be as bold as Shopify's CEO to succeed? A: No. Lutke's moves look dramatic in hindsight, but each one was a simple policy change. "Consider AI before hiring" is a one-sentence update to your hiring request form. You don't need to be bold — you need to be specific.
Q: What if my company is way smaller than Shopify? A: Smaller is actually easier. Fewer teams to coordinate, fewer legacy processes to untangle. A 50-person company can move through all four transformation stages in half the time a large enterprise needs.
Q: What if I'm not the CEO? A: You need the CEO's visible support, but you don't need to be the CEO. The critical requirement is that the mandate comes from the top. If you're a VP, your job is to make the case upward and get that mandate issued.
The Four Stages of AI Transformation
Think of AI transformation like learning to cook. Stage 1 is microwaving leftovers. Stage 4 is running a restaurant. You can't skip stages — the chef who never learned to boil water can't run a kitchen.
Notice something? Every bottleneck on this roadmap sits at an exec decision node. The technology at each stage largely exists. Companies stall because the mandate above it never arrived.
Here's the full breakdown:
| Stage | What It Looks Like | Cooking Analogy | The Exec Unlock That Gets You Here |
|---|---|---|---|
| 1. Experimenting (0-6 mo) | Random teams using ChatGPT. No shared tools. No one knows what's working. | Microwaving leftovers — anyone can do it, nobody's coordinating, results vary wildly. | Define one metric to track. Pick a single number (e.g., hours saved per week) and make every team report it the same way. |
| 2. Systematising (6-18 mo) | First internal AI tools. Shared standards emerge. Teams start comparing notes. | Following recipes — you have a cookbook, ingredients list, and consistent results. | Mandate AI consideration in all new feature specs. Before any feature is designed, the team must answer: "Could AI do part of this?" |
| 3. Scaling (18-36 mo) | AI in the core product. Competitive advantage visible. Customer-facing AI features. | Running a food truck — real customers, real stakes, but still a small operation you're refining. | AI eval in CI/CD. Every PM tracks AI metrics. Automated quality testing before anything ships. AI performance is a first-class metric. |
| 4. AI-Native (36+ mo) | Continuous improvement loop. Data flywheel spinning. AI is how the company thinks. | Running a restaurant — the kitchen, menu, suppliers, and staff all designed around cooking, not adapted from something else. | AI is assumed, not added. Every new initiative starts with "how does AI fit?" not "should we add AI?" |
A stalled organisation almost certainly lacks a mandate, not a model. Each stage builds on the previous one's infrastructure — there is no shortcut to AI-native.
✗ What fails
- ✗ Start with the technology, not the problem
- ✗ Pilot without measuring
- ✗ Skip change management
- ✗ Centralise all AI in IT
- ✗ Wait for the perfect data strategy
✓ What works
- ✓ Start with a painful, measurable business problem
- ✓ Set a baseline before pilots
- ✓ Invest in change management and training
- ✓ Embed AI champions in each business unit
- ✓ Start with the data you have
Why "Just Installing AI" Doesn't Work
Let's make this vivid.
Imagine a hospital that buys a world-class MRI machine. They wheel it into a supply closet. Doctors still diagnose by gut feel. Nurses don't know how to read the scans. Nobody changed the patient intake form to route cases to the MRI. The billing system can't code MRI procedures.
The MRI works perfectly. The hospital doesn't use it.
That's what "installing technology without changing how you work" looks like. And it happens at company after company with AI:
- The CRM team buys an AI lead-scoring tool. Sales reps ignore the scores because their commission structure still rewards cold-calling volume, not AI-prioritised leads.
- The support team deploys an AI chatbot. But the escalation policy wasn't updated, so every bot conversation gets escalated to a human anyway — creating more work, not less.
- The finance team gets AI-powered forecasting. The CFO still trusts the spreadsheet she's used for 10 years. The AI forecast sits in a dashboard nobody opens.
In every case, the technology was fine. The operating model — the rules, incentives, and habits that govern how work actually happens — never changed.
There Are No Dumb Questions
Q: So is the technology irrelevant? A: Not at all. You need good tools. But good tools without process change produce zero results. Process change without tools produces limited results. You need both — but if you had to pick which to invest in first, pick the process change every time. It's nearly free, and it's the bottleneck.
Q: How do I know if I'm "just installing" vs. actually transforming? A: Ask this question: "Has any team changed how they make decisions because of AI?" If the answer is no, you're installing. If yes, you're transforming.
Q: What's the fastest way to move from installing to transforming? A: Pick one team. Change one process. Measure one number. That's it. Don't try to transform the whole company at once. Transform one workflow, prove it works, then use that proof to get the mandate for the next one.
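"Measure one number" can be as small as this sketch: record the metric before the pilot, record it again after, and report the delta. The metric name and the figures below are hypothetical — substitute whatever single number your pilot team agreed to track.

```python
def pilot_report(metric, baseline, current):
    """Compare one pilot metric against its pre-pilot baseline."""
    change = (current - baseline) / baseline
    verdict = "keep going" if change < 0 else "rethink"  # lower hours = better
    return f"{metric}: {baseline} -> {current} ({change:+.0%}) - {verdict}"

# Hypothetical figures: hours the support team spends per week on ticket triage.
print(pilot_report("triage hours/week", baseline=40, current=28))
```

The point is not the arithmetic — it's that the baseline exists before the pilot starts, so the proof you take to the next team is a number, not an anecdote.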
The Transformation Playbook: What Execs Actually Control
Here's the uncomfortable truth: every time you leave AI adoption to the CTO alone, it stalls. Technology leaders can build the tools, but cultural change requires the CEO to visibly mandate it.
Think of it like a school principal. A math teacher can be brilliant, but if the principal doesn't put math on the schedule, doesn't buy textbooks, and doesn't make grades count — nobody learns math. The CTO is the math teacher. The CEO is the principal.
What execs actually control that matters:
| Decision | Why It Matters | What Happens Without It |
|---|---|---|
| Who approves AI features? | Clear ownership prevents the "nobody's responsible" problem | Features launch without quality checks, fail publicly, erode trust |
| How are AI outcomes measured? | Consistent metrics let you compare across teams and kill what isn't working | Every team invents their own success criteria; nothing is comparable |
| What does "done" look like? | A clear production bar prevents permanent pilot syndrome | 95% of pilots never reach production; budgets evaporate |
| When does AI get considered? | Mandating early consideration prevents AI being bolted on as an afterthought | AI arrives late in the process when it's too expensive to integrate properly |
You can accelerate every stage by deciding these four things upfront. Technology without that process change is just expense.
Building Capability: Start Internal, Then Go External
You can build organisational AI capability fastest by starting with internal tools. Why? Because internal tools let teams learn to work with AI before it reaches customers — where the stakes are higher.
Think of it like a pilot learning to fly. You don't start with a transatlantic flight full of passengers. You start in a simulator. Then a small plane. Then bigger planes. Then passengers.
Internal tools are your flight simulator. If the AI makes a mistake on an internal report, someone catches it and fixes it. If the AI makes a mistake on a customer's insurance claim, you're in the newspaper.
The progression looks like this:
- Internal productivity tools (low risk, fast learning) — AI that helps employees draft emails, summarise meetings, extract data from documents
- Internal decision-support tools (medium risk, building trust) — AI that recommends actions but a human always decides
- Customer-facing features with human oversight (higher risk, proven capability) — AI that drafts responses for customers, reviewed by a human before sending
- Autonomous customer-facing AI (highest risk, earned trust) — AI that acts independently because you've built the eval infrastructure to trust it
There Are No Dumb Questions
Q: Can't we skip straight to customer-facing AI if we use a really good model? A: The model might be great, but your organisation hasn't learned how to manage AI yet. Who reviews AI outputs? What's the escalation path when AI is wrong? How do you measure quality over time? You need to answer these questions on low-stakes internal tools before betting your customer relationships on them.
Q: Our competitors are already shipping AI features to customers. Are we falling behind? A: Maybe. But shipping bad AI to customers is worse than shipping no AI. Companies that rush customer-facing AI without internal infrastructure often end up pulling features back after public failures — which sets them back further than if they'd built the foundation first.
Now Apply It: The Northstar Insurance Challenge
You've learned the stages, the exec unlocks, and the "start internal" principle. Now put it all together on a real scenario.
Back to the Company With the Treadmill
Remember that company from the opening — the one where 90% of employees went back to their old way of working after the AI tool arrived? They didn't have a technology problem. They had an operating model problem. The treadmill worked perfectly. Nobody ran on it.
Shopify had access to the same models as every competitor. What Shopify had that the treadmill company didn't: three decisions that changed how work gets approved, tested, and specified. Not better AI — a different operating model.
The treadmill doesn't make you fit. The decision to change your routine does.
Key Takeaways
- Treadmill rule: Installing technology without changing how you work is just expense. The operating model — not the software — determines whether AI hits the P&L or stays in the pilot budget.
- Mandate, not model: Every time you leave AI adoption to the CTO alone, it stalls. Cultural change requires the CEO to visibly mandate it. Shopify's edge came from three process decisions, not better AI.
- Start internal: Build organisational AI capability fastest by starting with internal tools, so teams learn to work with AI before it reaches customers where the stakes are higher.
- Four decisions: Who approves AI features, how they're measured, what "done" means, and when AI gets considered. Decide these upfront and every stage accelerates.
Knowledge Check
1. Two years into an AI transformation, 95% of your pilots haven't reached production. What is the most likely systemic cause, and what is the first leadership intervention?
2. A vocal group of senior employees is resisting AI adoption, citing fears about job security. What is the most effective leadership response?
3. What is the clearest structural marker that distinguishes an 'AI-native' organisation from one that is merely 'doing AI'?
4. Your board wants a progress report on AI transformation. Which set of metrics should you present, and which should you deliberately omit?