AI Strategy and Competitive Positioning
Build an AI strategy that sequences initiatives so each one builds the capabilities needed for the next.
Imagine this...
It's Monday morning. You're the CEO of a 200-person software company. Your inbox has three AI proposals — each one promising to "transform the business." One costs $2 million and takes 18 months. Another costs $40,000 and takes 6 weeks. The third sits somewhere in between.
Your board wants answers by Friday. Your competitors just announced their own AI play. Your CTO says "we need to move fast." Your CFO says "we need to move carefully."
You're about to make a decision that will either launch your company ahead of every competitor — or burn millions of dollars and 18 months on something that never ships.
So... which proposal do you pick first?
Here's the secret most executives get wrong: the answer isn't about picking the best project. It's about picking the right order.
The #1 mistake executives make with AI
Most leaders who fall behind on AI don't lack ambition. They have too much ambition, executed in the wrong order.
Think about it like building a house. You wouldn't start with the roof, right? You need the foundation first. Then the walls. Then the roof. Each step builds on the one before it.
AI strategy works the same way. Sequence discipline — choosing the right order to execute initiatives — is what separates companies that build lasting AI advantage from those that spin their wheels.
There Are No Dumb Questions
Q: "But what if my competitor is already building the expensive thing? Won't I fall behind?"
A: Here's the thing — most companies are throwing money at the same commodity AI features, and very few of them are building sustainable advantage from that spending. Your advantage comes from sequencing, not spending. A $40k quick win that ships in 6 weeks puts you ahead of a competitor whose $2M project is stuck in month 8.
Q: "Can't I just hire a bunch of AI engineers and do everything at once?"
A: You could try! But here's what actually happens: the expensive project needs data pipelines that don't exist yet, AI skills your team hasn't built yet, and proof that AI works in your company. Quick wins create all three of those things. Skip them, and your expensive project stalls — burning budget and credibility at the same time.
The magic quadrant: Your new best friend
Here's a simple tool that changes everything. Map each AI project on two axes:
- How much effort does it take? (time, money, people)
- How much impact does it deliver? (revenue, productivity, competitive edge)
Here's what this chart tells you at a glance:
| Quadrant | What goes here | What you do |
|---|---|---|
| Quick wins (top-left) | Low effort, high impact | Start here. These build skills, generate data, and prove ROI. |
| Strategic bets (top-right) | High effort, high impact | Do these second. They need the capabilities your quick wins create. |
| Low priority (bottom-left) | Low effort, low impact | Nice-to-have. Do them if you have spare capacity. |
| Deprioritise (bottom-right) | High effort, low impact | Avoid these. They eat resources and deliver little. |
The critical insight: "AI in core product" and "Custom LLM training" both sit in the high-impact row. But the effort gap between them is enormous — a 3-month competitive move versus an 18-month capital commitment. Sequencing them correctly defines your entire strategy.
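The quadrant logic above can be sketched as a tiny classifier. This is an illustrative sketch only: the effort cutoff, the 1–10 impact scores, and the specific numbers assigned to each proposal are assumptions for discussion, not values the framework prescribes.

```python
# Illustrative sketch: classify AI initiatives on the effort/impact quadrant.
# All thresholds and scores below are hypothetical examples.

def quadrant(effort_months: float, impact_score: float,
             effort_cutoff: float = 6.0, impact_cutoff: float = 5.0) -> str:
    """Return the quadrant label for one initiative."""
    high_impact = impact_score >= impact_cutoff
    high_effort = effort_months >= effort_cutoff
    if high_impact and not high_effort:
        return "quick win"
    if high_impact and high_effort:
        return "strategic bet"
    if not high_impact and not high_effort:
        return "low priority"
    return "deprioritise"

# The three proposals from the scenario, with assumed impact scores (1-10):
proposals = {
    "Internal AI search":  (1.5, 7),   # ~6 weeks, high internal impact
    "AI in core product":  (3.0, 9),   # 3 months, customer-facing
    "Custom LLM training": (18.0, 9),  # 12-18 months, high but distant impact
}

for name, (effort, impact) in proposals.items():
    # A 2-month effort cutoff is an arbitrary illustrative choice;
    # the right cutoff depends on your organization's capacity.
    print(f"{name}: {quadrant(effort, impact, effort_cutoff=2.0)}")
```

Run with these assumed scores, the search tool lands in "quick win" while both high-impact bets land in "strategic bet" — which is exactly the sequencing question: same row, very different effort.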
The Sarah Chen story: From near-disaster to AI powerhouse
Sarah Chen had a problem.
As CEO of Cascade Software — a 200-person B2B SaaS company — she walked into her Q1 board meeting with three AI proposals on the table. The board was watching. Competitors were moving. She had one shot to get the sequence right.
Proposal 1: The Big Shiny Thing
Train a custom LLM on Cascade Software's six years of customer contract data. Cost: $2M+. Timeline: 12–18 months. Problem: Cascade Software didn't even have an ML team. This was the equivalent of trying to build a rocket before learning to ride a bicycle.

Proposal 2: The Quick Win Nobody Was Excited About
Build AI search over internal documentation. Cost: $40k. Timeline: 6–8 weeks. Impact: every single employee finds answers faster, new hires onboard in half the time, and those three "human encyclopedias" on the team stop getting interrupted 50 times a day.

Proposal 3: The Competitive Weapon
Add AI to the core product. Cost: $150k. Timeline: 3 months. Impact: something customers would actually see, use, and pay more for.
The turning point
Sarah had been ready to greenlight the $2M custom LLM. It sounded impressive. The board would love it. Her CTO was excited about it.
Then she mapped all three proposals on the quadrant chart. Right there in the boardroom. In real time.
And suddenly, everyone could see it: $2M and 18 months sitting in the top-right corner... right next to a $40k, 6-week win that could start tomorrow.
That single visual saved Cascade Software $2 million in premature spending.
What happened next
| Timeline | What shipped | Result |
|---|---|---|
| Week 6 | Internal AI search goes live | Support tickets drop 31% in 60 days (illustrative). Employees stop bugging the "human encyclopedias." |
| Month 4 | AI-powered product feature launches | Becomes the #1 cited reason for a 12-point NPS increase (illustrative) the following quarter. |
| 2027 roadmap | Custom LLM gets properly scheduled | Not cancelled — just sequenced. Now Cascade Software has the team, data, and skills to actually pull it off. |
Each initiative built the capability and data the next one required. The portfolio compounded instead of producing isolated experiments with no throughline.
There Are No Dumb Questions
Q: "Sarah's story makes it sound like you should never build a custom LLM. Is that true?"
A: Not at all! Custom LLMs can be incredibly powerful — when you're ready for them. The question isn't "should we build one?" It's "do we have the team, the data pipelines, and the proven AI wins to make this succeed?" If the answer is no, sequence something else first. Sarah didn't kill the custom LLM. She moved it to 2027 when Cascade Software would actually be ready.
Q: "What if my quick win fails?"
A: That's actually one of the best arguments for starting with quick wins! If a $40k, 6-week project fails, you've lost very little. You've learned a ton about what doesn't work — and you can course-correct fast. If a $2M, 18-month project fails? That's a career-defining disaster.
Build vs. buy: The decision that trips everyone up
Your CTO walks in and says: "We should build our own AI model from scratch."
Sounds exciting, right? But here's the reality check:
| Factor | Build custom model | Use commodity APIs (OpenAI, Anthropic, etc.) |
|---|---|---|
| Cost | $1M–$10M+ | $10k–$100k/year |
| Timeline | 12–24 months | 2–8 weeks |
| Team needed | ML engineers, data scientists, infrastructure | 1–2 developers |
| When it makes sense | You have proprietary data and workflow logic that encodes decision patterns competitors cannot access or replicate | You need standard AI capabilities (summarization, search, chat) |
| Risk | High — might not outperform commodity models | Low — proven, well-supported, continuously improving |
Here's the uncomfortable truth: GPT-4-class capabilities are now available through open-weight models. Frontier performance gaps close within 12–18 months. The AI model itself is becoming a commodity. What's not a commodity? Your proprietary data, your unique workflows, and how deeply you integrate AI into your specific business processes.
So the real question isn't "should we build or buy the model?" It's "where does our actual competitive moat live?"
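A quick back-of-envelope calculation makes the gap concrete. The figures below are pulled from the illustrative ranges in the table above, plus an assumed annual maintenance cost for the custom model; none of them are quotes or benchmarks.

```python
# Back-of-envelope build-vs-buy comparison over a multi-year window.
# All dollar figures are illustrative assumptions based on the table above.

def total_cost_build(upfront: float, annual_maintenance: float, years: float) -> float:
    """Custom model: large upfront spend plus ongoing maintenance."""
    return upfront + annual_maintenance * years

def total_cost_buy(annual_api_spend: float, years: float) -> float:
    """Commodity APIs: recurring spend only."""
    return annual_api_spend * years

years = 3
build = total_cost_build(upfront=2_000_000, annual_maintenance=400_000, years=years)
buy = total_cost_buy(annual_api_spend=100_000, years=years)

print(f"{years}-year build cost: ${build:,.0f}")
print(f"{years}-year buy cost:  ${buy:,.0f}")
print(f"Cost ratio: {build / buy:.0f}x")
```

Under these assumptions the build path costs roughly an order of magnitude more over three years — which is why the moat has to live in your data and workflows, not in the model itself, before the build option makes sense.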
The three-horizon board pitch
When you present AI investments to your board, don't lump everything together. Frame it across three horizons — each one speaks a different language:
| Horizon | Timeframe | How to frame it | Example |
|---|---|---|---|
| Horizon 1 | Now – 6 months | Efficiency and ROI. "This saves us $X per quarter." | Internal AI search, support deflection |
| Horizon 2 | 6–18 months | Strategic positioning. "This differentiates our product." | AI in core product, smart workflows |
| Horizon 3 | 18+ months | New business models. "This opens a market we can't reach today." | Custom LLM, AI-native product lines |
The mistake most executives make: they pitch everything as Horizon 3 transformation to sound bold. But boards want to see a ladder — proof that each step funds and enables the next one.
There Are No Dumb Questions
Q: "My board only cares about Horizon 3 moonshots. They think quick wins are boring."
A: Reframe it! Quick wins aren't boring — they're proof. Tell the board: "We shipped AI search in 6 weeks, cut support tickets by 31%, and the team that built it is now ready for the bigger play." That's not boring. That's a track record. Boards fund track records, not PowerPoint promises.
Q: "How do I know if something is Horizon 2 vs. Horizon 3?"
A: Simple test — can your current team and infrastructure build it within 18 months? If yes, Horizon 2. If it requires capabilities, data, or teams you don't have yet, Horizon 3. And remember: today's Horizon 3 becomes next year's Horizon 2 after you've shipped a few quick wins.
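The test in the answer above can be written as a tiny decision function. The field names and the example portfolio entries are hypothetical; the 18-month threshold and the capability question come straight from the text, while the 6-month Horizon 1/2 boundary is taken from the timeframes in the table.

```python
# Sketch of the horizon test from the Q&A above. The 18-month threshold and
# "capabilities you don't have yet" test follow the text; the field names,
# 6-month boundary, and example numbers are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Initiative:
    name: str
    months_to_ship: float          # with the team/infrastructure you have today
    needs_new_capabilities: bool   # team, data pipelines, or skills you lack

def horizon(i: Initiative) -> int:
    if i.needs_new_capabilities or i.months_to_ship > 18:
        return 3   # requires capabilities or time you don't have yet
    if i.months_to_ship > 6:
        return 2   # strategic positioning, 6-18 months
    return 1       # efficiency and ROI, shippable now

portfolio = [
    Initiative("Internal AI search", 1.5, False),
    Initiative("Smart workflows in product", 9, False),
    Initiative("Custom LLM", 15, True),
]

for i in portfolio:
    print(f"{i.name}: Horizon {horizon(i)}")
```

Note how the custom LLM lands in Horizon 3 not because of its timeline but because of the capability flag — and flipping that flag after a few shipped wins is exactly how "today's Horizon 3 becomes next year's Horizon 2."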
Back to the CEO scenario
That Monday morning inbox with three proposals and a Friday deadline doesn't have to be a coin flip. The CEO who mapped all three on the effort-impact quadrant — right there in the room — turned a pressure situation into a clear sequence: the $40k internal search tool first, the core product AI feature second, the custom LLM when the team and data pipelines were actually ready for it. The framework didn't pick the most exciting proposal. It picked the order that made each next step possible.

Six weeks later, the first win shipped, support tickets dropped 31%, and the team that built it now had the skills and credibility to take on the bigger bet. When the board asked for an update, the CEO didn't present a PowerPoint full of promises — she presented a track record. That's what AI strategy looks like when sequencing discipline replaces ambition.
Key takeaways
- Start with quick wins. They're not just "easy" — they build internal AI capability, generate real user data, and prove value before you commit to bigger bets.
- Check your assumptions on custom models. Every time you consider custom model training as a first move, ask: do we have the team, timeline, and budget? Most companies don't — and commodity APIs deliver 80% of the value at 5% of the cost.
- Sequence, sequence, sequence. Your AI portfolio should work like a chain — each initiative builds the capabilities the next one requires. A list of isolated experiments isn't a strategy. A compounding roadmap is.
Knowledge Check
1. McKinsey's State of AI research consistently finds that while AI adoption has reached a large majority of organizations surveyed, only a small minority are achieving disproportionate returns as "AI high performers" (McKinsey State of AI 2024 — check the most recent edition for current figures). What primarily explains that gap?
2. Your CTO recommends building a proprietary LLM from scratch. Under which condition would this decision most clearly create competitive advantage rather than waste capital?
3. What has changed in 2025–2026 that makes proprietary data and workflow integration more defensible than the AI model itself?
4. How should the framing of AI investment shift across a three-horizon portfolio strategy when presenting to the board?