Leading AI Teams
How to design the org structure that lets AI teams deliver, and what exec decisions unlock each maturity stage.
Org design kills more AI programs than bad models
Here's a secret nobody in the AI hype cycle wants you to hear: the reason most AI initiatives fail has nothing to do with AI. It's the org chart. It's who reports to whom. It's whether the VP of Product and the VP of Engineering had lunch last Tuesday.
Think about it this way. Imagine you're 10 years old and you want to open a chain of pizza restaurants. You've got the world's greatest pizza recipe (that's your AI model). But if you try to open 50 locations on day one without a central kitchen, without a supply chain, without training manuals — you'll have 50 locations making 50 different pizzas, burning through cash, and probably giving someone food poisoning.
That's exactly what happens with AI teams. Competitors don't pull ahead because they hire better engineers. They pull ahead because they structure their teams to ship.
Your job as an executive isn't to understand transformers or fine-tuning. Your job is to build the restaurant franchise system — then let the chefs cook.
The 4-stage pizza franchise (a.k.a. AI maturity model)
Every company moves through four stages. Think of them like growing a pizza franchise from a single oven in your garage to a national chain.
Notice the last row of the table below. Each stage transition is unlocked by an "Exec decision" — not a hire, not a tool purchase. You are the bottleneck and the unlock. Let's break each stage down.
The maturity stages at a glance
| | Stage 1: Ad-hoc | Stage 2: Centralised | Stage 3: Embedded | Stage 4: AI-native |
|---|---|---|---|---|
| Pizza analogy | Garage oven, no recipe book | Central kitchen built, supply chain running | Franchise locations open, using HQ recipes | Every location innovates on the menu, quality stays high |
| What it looks like | Engineers experiment solo with ChatGPT | AI platform team with shared tooling | AI engineers sit inside product teams | AI baked into the core product loop |
| Who owns AI | Nobody (or everybody — same thing) | Platform team | Product teams (platform supports) | The whole company |
| Cost visibility | Zero — buried in individual budgets | Fully tracked on one dashboard | Per-team dashboards from the platform | Per-feature cost in CI/CD (the automated release pipeline) |
| Compliance | Hope and prayers | Platform-enforced controls | Inherited from platform automatically | Continuous eval in the pipeline |
| Biggest risk | Shadow AI sprawl | Bottleneck — teams wait months | Fragmentation if platform wasn't built first | Over-engineering, gold-plating |
| Exec decision to advance | "Hire an AI platform lead and set standards" | "90-day embedding plan, maintain the platform" | "Eval is a product requirement, not post-launch" | You're cooking with gas |
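The Stage 4 cell "per-feature cost in CI/CD" can feel abstract. Here is a minimal sketch of what a cost gate in a release pipeline might look like. Everything in it is a hypothetical illustration — the budget table, function names, and prices are made up for the example, not taken from any real tool:

```python
# Hypothetical CI step: fail the build if a feature's estimated
# AI spend per 1,000 requests exceeds its budget.

# Illustrative numbers only -- real budgets would come from finance.
FEATURE_BUDGETS_USD = {"smart-search": 4.00, "summary-email": 1.50}

def estimate_cost_usd(avg_input_tokens, avg_output_tokens,
                      price_in_per_1k, price_out_per_1k,
                      requests=1000):
    """Estimated spend for `requests` calls at the given token averages."""
    per_call = (avg_input_tokens / 1000) * price_in_per_1k \
             + (avg_output_tokens / 1000) * price_out_per_1k
    return per_call * requests

def cost_gate(feature, avg_in, avg_out, price_in, price_out):
    """Return True if the feature is within budget; the CI job fails otherwise."""
    cost = estimate_cost_usd(avg_in, avg_out, price_in, price_out)
    budget = FEATURE_BUDGETS_USD[feature]
    print(f"{feature}: ${cost:.2f} per 1k requests (budget ${budget:.2f})")
    return cost <= budget

# Example: 800 input + 200 output tokens per call,
# at $0.003 / $0.006 per 1k tokens -> $3.60 per 1k requests.
ok = cost_gate("smart-search", 800, 200, 0.003, 0.006)
```

The point is not the arithmetic — it's that cost becomes a release criterion, checked automatically, instead of a surprise on next month's invoice.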
There Are No Dumb Questions
Q: Do I really need to go through all four stages in order? Can't we skip ahead?
No. And we have the disaster stories to prove it (keep reading). Skipping Stage 2 is like franchising your pizza restaurant before you have a recipe book or supply chain. You'll end up with 14 locations using 14 different cheeses, three food poisoning incidents, and a very angry board of directors.
Q: We already have engineers using AI. Doesn't that mean we're past Stage 1?
Using AI and governing AI are different things. If you don't know how much you're spending, who's using what, or whether customer data is hitting third-party APIs — you're still in Stage 1 no matter how many engineers have Copilot.
Q: Why is this the exec's problem? Can't engineering just figure it out?
Engineering can build the platform. Engineering cannot decide to reorganize itself. The transition from centralised to embedded requires moving people across team boundaries — that's an org design decision, and org design is your job.
Disaster theater: when stages go wrong
Company A: The stage 2 trap (a.k.a. "we built a kitchen but never opened any restaurants")
Company A built a beautiful centralised AI platform. State-of-the-art tooling. Gorgeous dashboards. One problem: they stayed there for two years.
The central team became a bottleneck so massive that product teams waited six months for a single AI feature. The intake queue grew to 47 tickets. Engineers started calling the AI platform team "the Department of No." Meanwhile, three competitors shipped AI-powered features and stole 12% market share.
The CEO's post-mortem said "technology failure." The real diagnosis? An org design decision killed it. They built the central kitchen but refused to open franchise locations. The chefs sat in HQ perfecting recipes that never reached a customer.
Company B: The stage skipper (a.k.a. "we opened 50 restaurants with no recipe book")
Company B looked at Company A's slow pace and said "not us." They jumped from Stage 1 straight to Stage 3 — embedding AI engineers into product teams without building a platform first.
The results were spectacular. Spectacularly bad.
- 14 different LLM providers in production (fourteen!)
- 3 PII incidents — customer data hitting unvetted third-party APIs
- No shared cost dashboard — the CFO discovered $180k/month in AI API spend buried across 23 different credit cards
- Zero reusable components — every team built their own prompt templates, their own logging, their own everything
The board found out about the PII incidents from a journalist. Speed without structure doesn't land on your roadmap — it lands on the board agenda as a liability.
Think about it: 50 pizza restaurants, each buying their own flour from whoever, no health inspections, no recipe standards. One of them serves raw chicken. Now it's on the evening news and all 50 locations have a problem.
Q: Company B moved fast. Isn't that what we're supposed to do?
Moving fast without a platform isn't speed — it's chaos with a deadline. Real speed comes from Stage 3 after Stage 2: embedded engineers ship fast because the platform handles compliance, logging, and cost tracking automatically. Company B's engineers actually spent more time reinventing infrastructure than building features.
The hero's journey: Priya's 90-day plan
Priya Anand, VP of Engineering at a 400-person fintech, watched Company A and Company B crash in real time. She had both their post-mortems pinned to her office wall.
Her situation: Stage 2, with a ticking clock. 90 days until a major contract renewal that required demonstrable AI capabilities in the product. Lose the contract, lose 30% of revenue. No pressure.
Month 1: The audit that changed everything (Days 1–30)
Priya started with a question that made her engineering leads uncomfortable: "Show me every AI API call we make and every dollar we spend on it."
The answer was worse than she expected:
- 6 different LLM providers already in production (she'd only approved 2)
- $47,000/month in AI API spend scattered across individual engineering budgets — invisible to finance
- Prompt templates saved in 4 different Notion workspaces, Slack threads, and one engineer's personal GitHub repo
The exec decision: One approved provider. One dashboard. Hard cutover deadline — 30 days, no exceptions.
Result after Month 1: $47k/month dropped to $31k/month. Not because they cut capability — because they eliminated duplicate providers, negotiated volume pricing, and stopped paying for 6 different rate limits.
Months 2–3: The embedding sprint (Days 31–90)
With the platform locked down, Priya made her second exec decision: embed AI engineers into the top 3 product teams. Not all teams — just the three closest to customer-facing features for the contract renewal.
She built a shared API wrapper — a single internal connection point that all teams use to call the AI service — with logging baked in. Every API call, every token, every cost automatically tracked. Compliance checks ran on every request. No team had to think about it.
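The wrapper Priya built might look something like the sketch below. This is an illustration of the idea, not her actual code: the provider stub, the PII regex, and the flat cost rate are all hypothetical placeholders standing in for real SDK calls and real compliance tooling.

```python
import logging
import re
import time

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai-platform")

COST_PER_1K_TOKENS_USD = 0.004  # illustrative flat rate, not a real price

# Crude illustrative PII check (emails only); a real check would be far richer.
PII_PATTERN = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

class ComplianceError(Exception):
    """Raised when a prompt fails the platform's compliance check."""

def call_ai(team: str, prompt: str) -> str:
    """Single internal entry point for every AI call.

    Compliance checks and cost logging live here, so product teams
    never have to think about them."""
    if PII_PATTERN.search(prompt):
        log.error("blocked call from %s: possible PII in prompt", team)
        raise ComplianceError("prompt appears to contain PII")

    start = time.time()
    response = _call_approved_provider(prompt)
    tokens = (len(prompt) + len(response)) // 4  # rough token estimate
    cost = tokens / 1000 * COST_PER_1K_TOKENS_USD

    # Every call lands on the shared dashboard with team attribution.
    log.info("team=%s tokens=%d cost_usd=%.5f latency_s=%.2f",
             team, tokens, cost, time.time() - start)
    return response

def _call_approved_provider(prompt: str) -> str:
    # Stand-in for the one approved provider's SDK call.
    return "stubbed response"
```

The design choice worth stealing: governance is enforced at the only door to the model, so "comply with policy" stops being a per-team chore and becomes a property of the platform.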
The turning point: Week 6. The first embedded engineer shipped a customer-facing feature in 4 days. The centralised team had estimated it at 8 weeks.
4 days vs. 8 weeks. Same engineer. Same skills. Different org structure.
That single data point gave Priya the internal proof to accelerate the remaining embeddings. Within 6 months: feature velocity ran 3x faster, and the fintech won the contract renewal.
The Priya playbook: what to steal
| Phase | Duration | Key action | Measurable outcome |
|---|---|---|---|
| Audit | Days 1–30 | Find all AI usage, pick one provider, set cutover deadline | Cost visible on one dashboard, spend reduced |
| Platform lock | Days 15–30 | Build shared API wrapper with logging & compliance | Every API call tracked automatically |
| Embed | Days 31–90 | Move AI engineers into top 3 product teams | First feature shipped by embedded engineer |
| Prove | Day 45+ | Use first win as proof point to accelerate | Internal buy-in for remaining embeddings |
The pattern that always holds
Centralisation builds the platform. Embedding delivers the velocity. Skip either phase and you pay — in schedule, in compliance exposure, or in both.
Think back to pizza. The central kitchen (Stage 2) creates the recipes, the supply chain, the quality standards. The franchise locations (Stage 3) serve the customers. You need both. In that order.
✗ Without AI
- ✗ Output = headcount × productivity
- ✗ Hire specialists for each function
- ✗ Training measured in courses completed
- ✗ Success = tasks completed
✓ With AI
- ✓ Output = (headcount × productivity) × AI leverage
- ✓ Hire generalists who use AI to specialise
- ✓ Training measured in AI literacy and judgment
- ✓ Success = decisions made well
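The "with AI" output formula is simple multiplication, but it's worth seeing the effect with numbers. A toy calculation — the headcount, productivity units, and 1.5× leverage figure are all made-up illustrations:

```python
def team_output(headcount, productivity, ai_leverage=1.0):
    """Output = (headcount x productivity) x AI leverage.
    ai_leverage = 1.0 means no AI uplift (the 'without AI' formula)."""
    return headcount * productivity * ai_leverage

baseline = team_output(10, 100)        # 10 people, 100 units each
with_ai = team_output(10, 100, 1.5)    # illustrative 1.5x leverage

# A 1.5x leverage has the output effect of 5 extra hires -- without hiring.
print(baseline, with_ai)
```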
Back to Priya
The 90-day clock ran out. Priya walked into the contract renewal meeting with a working demo — a customer-facing AI feature that an embedded engineer had shipped in four days.
The client's procurement lead leaned forward. "Your competitors said they'd need eight weeks to build something like this. You did it in four days?"
"Same engineer," Priya said. "Different org structure."
They signed. Priya kept the post-mortems of Company A and Company B pinned to her wall.
Key takeaways
- Build the kitchen before you franchise. Use the centralised AI team to build the shared platform — but once it's built, push AI engineers into every product team. Don't keep them central.
- Skipping stages is not speed — it's debt. Every time you embed AI engineers before the platform layer is ready, you create fragmentation, cost chaos, and compliance gaps. Just ask Company B.
- You are the unlock. Each maturity stage advances with a single exec decision — you don't need to wait for a new hire, a new tool, or a new budget cycle. The decision comes first; everything else follows.
One more thing: If you remember nothing else from this module, remember this — the 4-day vs. 8-week number from Priya's story. Same engineer, same skills, different org structure. That's the power of getting the org design right. That's your job.
Knowledge Check
1. You're hiring your first Chief AI Officer. Your organization has several siloed data teams but limited production AI deployment. Which background profile is most important for this hire?
2. Your centralised AI CoE (Centre of Excellence — a dedicated team that owns shared AI tooling and standards) has shipped eight models in two years, but business unit adoption is below 30%. What org design change addresses this most directly, and what is the primary trade-off?
3. An AI initiative is six months in with no production deployment. Which root cause is most likely, and what is the executive's most effective intervention?
4. How should you evaluate an AI product manager's performance differently from a traditional PM?