
AI Strategy & Leadership
Course outline:

1. The AI Landscape — What Today's AI Can and Can't Do
2. AI Strategy and Competitive Positioning
3. The Economics of AI — ROI Frameworks and Cost Structures
4. AI Risk and Governance — Regulation, Liability, and Responsible AI
5. Leading AI Teams
6. AI Transformation
7. Data Strategy
8. Future of Work
Module 2 · ~20 min

AI Strategy and Competitive Positioning

Build an AI strategy that sequences initiatives so each one builds the capabilities needed for the next.

Imagine this...

It's Monday morning. You're the CEO of a 200-person software company. Your inbox has three AI proposals — each one promising to "transform the business." One costs $2 million and takes 18 months. Another costs $40,000 and takes 6 weeks. The third sits somewhere in between.

Your board wants answers by Friday. Your competitors just announced their own AI play. Your CTO says "we need to move fast." Your CFO says "we need to move carefully."

You're about to make a decision that will either launch your company ahead of every competitor — or burn millions of dollars and 18 months on something that never ships.

So... which proposal do you pick first?

Here's the secret most executives get wrong: the answer isn't about picking the best project. It's about picking the right order.


🔑AI strategy is business strategy
There is no such thing as a standalone "AI strategy." You have a business strategy — grow revenue in segment X by doing Y better than competitors. AI is one of the tools you might use to execute that strategy. Companies that treat AI as a separate initiative consistently underperform companies that ask "which of our strategic priorities can AI accelerate, and how?"

The #1 mistake executives make with AI

Most leaders who fall behind on AI don't lack ambition. They have too much ambition, executed in the wrong order.

Think about it like building a house. You wouldn't start with the roof, right? You need the foundation first. Then the walls. Then the roof. Each step builds on the one before it.

AI strategy works the same way. Sequence discipline — choosing the right order to execute initiatives — is what separates companies that build lasting AI advantage from those that spin their wheels.

💭You're Probably Wondering…

There Are No Dumb Questions

Q: "But what if my competitor is already building the expensive thing? Won't I fall behind?"

A: Here's the thing — most companies are throwing money at the same commodity AI features, and very few believe they're building sustainable advantages from it. Your advantage comes from sequencing, not spending. A $40k quick win that ships in 6 weeks puts you ahead of a competitor whose $2M project is stuck in month 8.

Q: "Can't I just hire a bunch of AI engineers and do everything at once?"

A: You could try! But here's what actually happens: the expensive project needs data pipelines that don't exist yet, AI skills your team hasn't built yet, and proof that AI works in your company. Quick wins create all three of those things. Skip them, and your expensive project stalls — burning budget and credibility at the same time.


The magic quadrant: Your new best friend

Here's a simple tool that changes everything. Map each AI project on two axes:

  • How much effort does it take? (time, money, people)
  • How much impact does it deliver? (revenue, productivity, competitive edge)

Here's what this chart tells you at a glance:

| Quadrant | What goes here | What you do |
|---|---|---|
| Quick wins (top-left) | Low effort, high impact | Start here. These build skills, generate data, and prove ROI. |
| Strategic bets (top-right) | High effort, high impact | Do these second. They need the capabilities your quick wins create. |
| Low priority (bottom-left) | Low effort, low impact | Nice-to-have. Do them if you have spare capacity. |
| Deprioritise (bottom-right) | High effort, low impact | Avoid these. They eat resources and deliver little. |

The critical insight: "AI in core product" and "Custom LLM training" both sit in the high-impact row. But the effort gap between them is enormous — a 3-month competitive move versus an 18-month capital commitment. Sequencing them correctly defines your entire strategy.
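The quadrant logic above can be sketched as a few lines of code. This is an illustrative sketch, not a formal model: the "low effort" threshold (under 3 months and under $150k) is the rule of thumb this module's closing challenge uses, and the function name and inputs are assumptions for the example.

```python
# Illustrative sketch of the effort-impact quadrant. The low-effort
# threshold (under 3 months, under $150k) is this module's rule of thumb,
# not a universal standard.

def quadrant(months: float, cost_usd: float, high_impact: bool) -> str:
    """Place one AI initiative on the effort x impact quadrant."""
    low_effort = months < 3 and cost_usd < 150_000
    if high_impact:
        return "Quick win" if low_effort else "Strategic bet"
    return "Low priority" if low_effort else "Deprioritise"

# The three proposals from this module's case study:
print(quadrant(1.75, 40_000, True))   # internal AI search -> Quick win
print(quadrant(3, 150_000, True))     # core product AI -> Strategic bet
print(quadrant(15, 2_000_000, True))  # custom LLM -> Strategic bet
```

Note what the sketch makes obvious: two of the three proposals land in the same high-impact quadrant, so the decision is about which one your current capabilities can actually execute first.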

⚡

Quick check

25 XP
You have two AI projects: Project A costs $30k and takes 4 weeks. Project B costs $500k and takes 10 months. Both have high impact. Using the quadrant framework, which quadrant does each project land in — and which do you start with? _Hint: Look at the time and cost for each project. The quadrant has a "low effort" row — what would you use as the threshold to decide whether something belongs there? Apply that threshold to both projects before deciding their quadrant._


The Sarah Chen story: From near-disaster to AI powerhouse

Sarah Chen had a problem.

As CEO of Cascade Software — a 200-person B2B SaaS company — she walked into her Q1 board meeting with three AI proposals on the table. The board was watching. Competitors were moving. She had one shot to get the sequence right.

Proposal 1: The Big Shiny Thing
Train a custom LLM on Cascade Software's six years of customer contract data. Cost: $2M+. Timeline: 12–18 months. Problem: Cascade Software didn't even have an ML team. This was the equivalent of trying to build a rocket before learning to ride a bicycle.

Proposal 2: The Quick Win Nobody Was Excited About
Build AI search over internal documentation. Cost: $40k. Timeline: 6–8 weeks. Impact: every single employee finds answers faster, new hires onboard in half the time, and those three "human encyclopedias" on the team stop getting interrupted 50 times a day.

Proposal 3: The Competitive Weapon
Add AI to the core product. Cost: $150k. Timeline: 3 months. Impact: something customers would actually see, use, and pay more for.

The turning point

Sarah had been ready to greenlight the $2M custom LLM. It sounded impressive. The board would love it. Her CTO was excited about it.

Then she mapped all three proposals on the quadrant chart. Right there in the boardroom. In real time.

And suddenly, everyone could see it: $2M and 18 months sitting in the top-right corner... right next to a $40k, 6-week win that could start tomorrow.

That single visual saved Cascade Software $2 million in premature spending.

What happened next

| Timeline | What shipped | Result |
|---|---|---|
| Week 6 | Internal AI search goes live | Support tickets drop 31% in 60 days (illustrative). Employees stop bugging the "human encyclopedias." |
| Month 4 | AI-powered product feature launches | Becomes the #1 cited reason for a 12-point NPS increase (illustrative) the following quarter. |
| 2027 roadmap | Custom LLM gets properly scheduled | Not cancelled — just sequenced. Now Cascade Software has the team, data, and skills to actually pull it off. |

Each initiative built the capability and data the next one required. The portfolio compounded instead of producing isolated experiments with no throughline.

💭You're Probably Wondering…

There Are No Dumb Questions

Q: "Sarah's story makes it sound like you should never build a custom LLM. Is that true?"

A: Not at all! Custom LLMs can be incredibly powerful — when you're ready for them. The question isn't "should we build one?" It's "do we have the team, the data pipelines, and the proven AI wins to make this succeed?" If the answer is no, sequence something else first. Sarah didn't kill the custom LLM. She moved it to 2027 when Cascade Software would actually be ready.

Q: "What if my quick win fails?"

A: That's actually one of the best arguments for starting with quick wins! If a $40k, 6-week project fails, you've lost very little. You've learned a ton about what doesn't work — and you can course-correct fast. If a $2M, 18-month project fails? That's a career-defining disaster.

⚡

Spot the pattern

25 XP
Look at Sarah's three proposals again. What specific capability did the internal search project (Proposal 2) create that made the core product AI feature (Proposal 3) possible? _Hint: Think about what the company gained beyond just the search tool itself — people, skills, data, and confidence._


Build vs. buy: The decision that trips everyone up

Your CTO walks in and says: "We should build our own AI model from scratch."

Sounds exciting, right? But here's the reality check:

| Factor | Build custom model | Use commodity APIs (OpenAI, Anthropic, etc.) |
|---|---|---|
| Cost | $1M–$10M+ | $10k–$100k/year |
| Timeline | 12–24 months | 2–8 weeks |
| Team needed | ML engineers, data scientists, infrastructure | 1–2 developers |
| When it makes sense | You have proprietary data and workflow logic that encodes decision patterns competitors cannot access or replicate | You need standard AI capabilities (summarization, search, chat) |
| Risk | High — might not outperform commodity models | Low — proven, well-supported, continuously improving |

Here's the uncomfortable truth: GPT-4-class capabilities are now available through open-weight models. Frontier performance gaps close within 12–18 months. The AI model itself is becoming a commodity. What's not a commodity? Your proprietary data, your unique workflows, and how deeply you integrate AI into your specific business processes.

So the real question isn't "should we build or buy the model?" It's "where does our actual competitive moat live?"
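That build-vs-buy check can be condensed into a short decision sketch. This is a hedged illustration of the table above, not a formal framework: the three yes/no inputs and the "buy now, build later" middle path are assumptions distilled for the example.

```python
# A hedged sketch of the build-vs-buy table. The inputs and the middle
# "buy now, build later" path are illustrative assumptions, not doctrine.

def build_or_buy(proprietary_data: bool, unique_workflow: bool,
                 has_ml_team: bool) -> str:
    """Return 'build' only when both the moat and the capability exist."""
    if proprietary_data and unique_workflow and has_ml_team:
        return "build"
    if proprietary_data and unique_workflow:
        # A real moat exists, but the team doesn't yet: sequence it.
        return "buy now, build later"
    # No defensible moat: commodity APIs cover standard capabilities.
    return "buy"

# The insurance-claims exercise below: 15 years of adjuster decisions,
# but (we assume) no ML team yet.
print(build_or_buy(True, True, False))  # -> buy now, build later
```

The point of the middle branch is the module's core argument: a moat without capability isn't a green light, it's a sequencing problem.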

⚡

Build or buy?

25 XP
Your company processes insurance claims. You have 15 years of claims data with adjuster decisions, notes, and outcomes that no competitor has access to. Your CTO proposes building a custom claims-processing AI. Using the build vs. buy framework above, does this qualify as a good "build" case? Why or why not? _Hint: Check the "When it makes sense" row. Does 15 years of unique adjuster decision data count as proprietary workflow logic competitors can't replicate?_

Before sequencing projects, answer four strategy questions:

  • Where do we compete? Which markets and customer segments are we targeting? AI amplifies where you already have advantage — it doesn't create advantage where you have none.
  • What's our moat? Proprietary data, unique customer relationships, regulatory expertise? These create defensible AI applications. Generic use of public AI APIs is not a moat.
  • Where do we use AI internally vs. externally? Internal: productivity, cost reduction, speed. External: customer-facing features, new products. Different risk profiles, different governance.
  • How do we build the capability? Hire, retrain, or partner? Each has speed/cost/risk tradeoffs. Most companies will do all three.

The three-horizon board pitch

When you present AI investments to your board, don't lump everything together. Frame it across three horizons — each one speaks a different language:

| Horizon | Timeframe | How to frame it | Example |
|---|---|---|---|
| Horizon 1 | Now – 6 months | Efficiency and ROI. "This saves us $X per quarter." | Internal AI search, support deflection |
| Horizon 2 | 6–18 months | Strategic positioning. "This differentiates our product." | AI in core product, smart workflows |
| Horizon 3 | 18+ months | New business models. "This opens a market we can't reach today." | Custom LLM, AI-native product lines |

The mistake most executives make: they pitch everything as Horizon 3 transformation to sound bold. But boards want to see a ladder — proof that each step funds and enables the next one.

💭You're Probably Wondering…

There Are No Dumb Questions

Q: "My board only cares about Horizon 3 moonshots. They think quick wins are boring."

A: Reframe it! Quick wins aren't boring — they're proof. Tell the board: "We shipped AI search in 6 weeks, cut support tickets by 31%, and the team that built it is now ready for the bigger play." That's not boring. That's a track record. Boards fund track records, not PowerPoint promises.

Q: "How do I know if something is Horizon 2 vs. Horizon 3?"

A: Simple test — can your current team and infrastructure build it within 18 months? If yes, Horizon 2. If it requires capabilities, data, or teams you don't have yet, Horizon 3. And remember: today's Horizon 3 becomes next year's Horizon 2 after you've shipped a few quick wins.
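The 18-month test in that answer is mechanical enough to sketch in code. A minimal illustration, assuming two inputs: months to ship with the current team, and whether the initiative needs capabilities, data, or teams you don't have yet.

```python
# A minimal sketch of the horizon test above. The inputs are assumptions:
# months_to_ship with the *current* team, and a flag for capability gaps.

def horizon(months_to_ship: float, needs_new_capability: bool) -> int:
    """Assign an AI initiative to a board-pitch horizon (1, 2, or 3)."""
    if needs_new_capability or months_to_ship > 18:
        return 3  # new business models; capabilities you don't have yet
    return 1 if months_to_ship <= 6 else 2

print(horizon(1.5, False))  # internal AI search -> 1
print(horizon(10, False))   # AI product feature -> 2
print(horizon(14, True))    # custom LLM, no ML team yet -> 3
```

And the last line illustrates the closing point of that answer: flip `needs_new_capability` to `False` after a few shipped quick wins, and the same initiative drops from Horizon 3 to Horizon 2.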

⚡

Pitch practice

25 XP
You're presenting to your board next week. You have three AI projects: (1) AI-powered email triage that saves each salesperson 45 minutes/day, (2) an AI feature in your product that competitors don't have, (3) a plan to build an entirely AI-native product line for a new market. Assign each project to the correct horizon and write one sentence for each that explains it in the language that horizon uses (efficiency, strategic positioning, or new business model).


Back to the CEO scenario

That Monday morning inbox with three proposals and a Friday deadline doesn't have to be a coin flip. The CEO who mapped all three on the effort-impact quadrant — right there in the room — turned a pressure situation into a clear sequence: the $40k internal search tool first, the core product AI feature second, the custom LLM when the team and data pipelines were actually ready for it. The framework didn't pick the most exciting proposal. It picked the order that made each next step possible. Six weeks later, the first win shipped, support tickets dropped 31%, and the team that built it now had the skills and credibility to take on the bigger bet. When the board asked for an update, the CEO didn't present a PowerPoint full of promises — she presented a track record. That's what AI strategy looks like when sequencing discipline replaces ambition.

Key takeaways

  • Start with quick wins. They're not just "easy" — they build internal AI capability, generate real user data, and prove value before you commit to bigger bets.
  • Check your assumptions on custom models. Every time you consider custom model training as a first move, ask: do we have the team, timeline, and budget? Most companies don't — and commodity APIs deliver 80% of the value at 5% of the cost.
  • Sequence, sequence, sequence. Your AI portfolio should work like a chain — each initiative builds the capabilities the next one requires. A list of isolated experiments isn't a strategy. A compounding roadmap is.

The big challenge

⚡

Challenge

50 XP
Beacon is a 150-person B2B procurement software company. Their CEO has three AI proposals on his desk:

  • Train a custom LLM on 10 years of procurement contract data (est. 14 months, $1.8M, needs ML team)
  • Build AI search over their 200-page internal knowledge base (est. 5 weeks, $25k, one engineer)
  • Rebuild their core RFP analysis feature with AI (est. 10 weeks, $120k, product + 2 engineers)

1. Place each initiative on the impact × effort quadrant. Use this rule: high impact = transforms a core workflow; low effort = under 3 months and under $150k.
2. Write out the recommended execution sequence — first, second, third — and give one sentence of reasoning for each.
3. The CEO is excited about the custom LLM. State the single strongest argument against starting with it in Year 1.

_Hint: For each initiative, look at both axes independently: how much effort (time + cost + people) and how much impact (does it transform a core workflow, or just make it nicer)? Plot each one, then sequence by quadrant — which quadrant always goes first? And what's the single most important question the CEO should answer about the custom LLM before committing to it?_


Quiz time

?

Knowledge Check

1.McKinsey's State of AI research consistently finds that while AI adoption has reached a large majority of organizations surveyed, only a small minority are achieving disproportionate returns as 'AI high performers' (McKinsey State of AI 2024 — check the most recent edition for current figures). What primarily explains that gap?

2.Your CTO recommends building a proprietary LLM from scratch. Under which condition would this decision most clearly create competitive advantage rather than waste capital?

3.What has changed in 2025–2026 that makes proprietary data and workflow integration more defensible than the AI model itself?

4.How should the framing of AI investment shift across a three-horizon portfolio strategy when presenting to the board?

Previous: The AI Landscape — What Today's AI Can and Can't Do

Next: The Economics of AI — ROI Frameworks and Cost Structures