© 2026 Octo

AI Strategy & Leadership
1. The AI Landscape — What Today's AI Can and Can't Do
2. AI Strategy and Competitive Positioning
3. The Economics of AI — ROI Frameworks and Cost Structures
4. AI Risk and Governance — Regulation, Liability, and Responsible AI
5. Leading AI Teams
6. AI Transformation
7. Data Strategy
8. Future of Work
Module 1 · ~20 min

The AI Landscape — What Today's AI Can and Can't Do

Understand where your company sits in the AI value chain and which layer is actually defensible.

The $2 Million Mistake

Dana Park, Meridian Analytics, and Crestline are fictional composites illustrating real API deprecation risks that have affected companies across the industry.

In January 2024, a CEO named Dana Park got an email that made her stomach drop.

OpenAI was killing GPT-3 (specifically the legacy GPT-3 base Completions API models). Done. Gone. Her entire product — the thing her company Meridian Analytics had spent two years building — was wired directly into that model like Christmas lights plugged into a single outlet. When OpenAI pulled the plug, the lights went dark.

Her engineering team spent the next three months — and roughly $800,000 in salary — just keeping the product alive. Not improving it. Not shipping new features. Just surviving.

Meanwhile, across town, a competitor called Crestline got the same email. Their CTO, Marcus Webb, read it, shrugged, changed one line in a config file, and switched to a different AI model in a week. Their product actually got better. Customer satisfaction jumped 12 points.

Same industry. Same disruption. Wildly different outcomes.

What did Crestline know that Meridian didn't? They understood where in the AI stack to place their bets. And that's exactly what you're about to learn.


The Restaurant Supply Chain (Your New Mental Model)

Imagine you're opening a restaurant. You need:

  • A farm that grows the ingredients (lettuce, tomatoes, beef)
  • A food distributor that packages and delivers those ingredients
  • Kitchen tools — ovens, blenders, pans
  • Your recipes, your brand, your relationship with diners

Now here's the thing: any restaurant can buy tomatoes from the same farm. Any restaurant can order the same oven from the same catalog. Those things are commodities — everyone has access to the same stuff, so they don't make you special.

What makes your restaurant special? Your secret sauce. Your grandmother's recipe. The fact that your regulars walk in and you already know their order. The experience you create. That's what people pay for, and that's what a competitor cannot just go buy.

The AI world works exactly the same way.

Let's meet each layer as if it were a character in a story.

2017 — The Transformer paper

Attention Is All You Need — Vaswani et al. (Google Brain/Google Research, 2017). The foundation of every modern LLM.

2020 — GPT-3 launches

175B parameters. Step-change in LLM scale — substantially more reliable long-form text generation. Opened the API era for AI product development.

2022 — ChatGPT

100M users in ~2 months — among the fastest consumer product adoptions on record at the time.

2023 — The race heats up

GPT-4, Claude, Gemini, LLaMA — every major tech company commits. $90B+ invested (widely estimated, various analyst sources).

2024 — Multimodal and reasoning

Models that see images, write code, and reason. AI enters the enterprise mainstream.

2025 — Agentic AI

AI systems that take multi-step actions autonomously. The next wave of business transformation.


Meet the Four Layers

Layer 1: The Farm (Infrastructure)

Who lives here: NVIDIA (makes the specialized chips), and the big cloud providers — AWS, Azure, Google Cloud.

What it does: Provides the raw computing power that AI needs to think. Thousands of specialized chips crunching numbers at incredible speed. Without this layer, nothing else exists.

Personality: Reliable, unglamorous, increasingly cheap. Think of it as electricity — absolutely essential, but you don't compete on who has the best power grid. You just plug in.

Can you build a moat here? Almost certainly not, unless you're literally NVIDIA. These services are becoming commodities. You can rent them from multiple providers, and prices keep falling.

Layer 2: The Ingredients (Foundation Models)

Who lives here: Anthropic (Claude), OpenAI (GPT), Google (Gemini), Meta (Llama).

What it does: These are the "brains" — massive AI models trained on enormous amounts of text and data. They can write, reason, analyze, and create. They're the ingredients your application uses.

Personality: Flashy, fast-improving, and constantly being replaced by something better. Today's best model is next quarter's second-best model.

Can you build a moat here? Dangerous territory. If you hitch your wagon to one specific model, you're betting that this model stays the best forever. (Spoiler: it won't.)

Layer 3: The Kitchen Tools (Developer Platforms)

Who lives here: LangChain, Vercel AI SDK, HuggingFace — tools that make it easier to connect your application to AI models.

What it does: Provides the plumbing. Makes it easier to send data to a model, get results back, and do useful things with those results.

Personality: Helpful, ever-changing, somewhat interchangeable. Like kitchen appliances — you want good ones, but you can swap a blender brand without reinventing your recipes.

Can you build a moat here? Not really. These tools are open-source or widely available. Your competitor uses the same ones.

Layer 4: The Restaurant (Your Application Layer)

Who lives here: You. Your prompts, your proprietary data, your user experience, your domain expertise, your customer relationships.

What it does: This is where the magic happens — where raw AI capabilities get shaped into something your customers specifically love. This includes:

  • RAG (Retrieval-Augmented Generation) — feeding your private data into the AI at query time so it gives answers specific to your business
  • Your UX — how the AI experience actually feels to your users
  • Your proprietary data — years of customer behavior, industry knowledge, and domain expertise that no one else has

Personality: Unique, hard to copy, gets stronger over time. Like a restaurant's reputation — every happy customer makes the next one more likely.

Can you build a moat here? YES. This is the only layer where your investment compounds. A competitor can switch to the same AI model you use overnight. They cannot replicate your data, your user relationships, or your domain expertise overnight.


⚡

Layer Spotter

25 XP
For each item below, identify which layer of the AI stack it belongs to (1 = Infrastructure, 2 = Foundation Models, 3 = Developer Platforms, 4 = Application):

1. A company's five years of customer purchase history, used to personalize AI recommendations
2. An NVIDIA H100 GPU chip
3. The LangChain framework for building AI pipelines
4. OpenAI's GPT-4o model

_Hint: Ask yourself — "Could a competitor just go buy this, or is it unique to this company?"_


The Comparison Table: Where Should You Invest?

|  | Layer 1: Infrastructure | Layer 2: Foundation Models | Layer 3: Developer Platforms | Layer 4: Application |
|---|---|---|---|---|
| Restaurant analogy | The farm | The ingredients | The kitchen tools | Your recipes + brand |
| Examples | NVIDIA, AWS, Azure | Claude, GPT, Gemini | LangChain, HuggingFace | Your data, UX, prompts |
| Defensibility | Very low | Low | Low | High |
| Commoditizing? | Fast | Fast | Moderate | Slow (compounds) |
| Switching cost | Low | Low (if designed right) | Low | High (your advantage) |
| Your strategy | Rent it | Use it, don't marry it | Pick good tools | Invest heavily here |
100M — ChatGPT users in ~2 months (Reuters/UBS, Jan 2023)
$200B+ — global AI investment in 2024 (infrastructure + software; scope varies by analyst, and figures range widely depending on whether hardware capex is included — verify against current Goldman Sachs or IDC reports)
92% — Fortune 500 companies using OpenAI products (OpenAI self-reported claim, 2024 — not independently verified; based on a broad definition of 'use')

Note on the 92% figure: OpenAI claimed in 2024 that 92% of Fortune 500 companies use its products. This figure is self-reported by OpenAI and has not been independently verified.


Three Companies, Three Strategies, One Winner

Let's return to Meridian and Crestline — and meet a third company — to see the full picture.

Meridian Analytics: "We Built on Quicksand"

Dana Park's team at Meridian built their product like a restaurant that only knows how to cook with one specific brand of tomato. Their code was tightly coupled to GPT-3 — meaning their software depended on that exact model's behavior, its specific output format, even the quirky way it named data fields.

When OpenAI deprecated GPT-3 in early 2024? Their entire kitchen collapsed. Three months of rebuilding. $800K burned. Six features behind their roadmap by Q2.

Their mistake: They invested in Layer 2 instead of Layer 4. They built on top of the ingredients instead of building great recipes.

Volta Intelligence: "We Grew Our Own Tomatoes"

(Volta Intelligence is a fictional illustrative example.)

Volta went the opposite direction. They said, "We'll grow our own tomatoes!" — meaning they trained their own AI model from scratch on internal data. The initial investment: $2 million (illustrative). Ongoing cost: $400K per quarter in compute and ML engineer salaries.

Then GPT-4o launched in May 2024 and outperformed Volta's custom model on every benchmark their customers actually cared about. Suddenly Volta was the restaurant growing expensive organic tomatoes while the grocery store next door started selling ones that tasted better for a fraction of the price.

Their mistake: They invested in Layer 2 (trying to own the ingredients) when they should have invested in Layer 4 (the recipes and dining experience).

Crestline: "We Perfected the Recipes"

Crestline was the restaurant that said: "We don't care which farm our tomatoes come from. We care about our recipes, our regulars, and our reputation."

They built on top of the API (a standardized connection to the AI model — think of it as a universal power adapter) with a proprietary data layer — five years of user behavior signals and deep domain knowledge baked into their retrieval and ranking systems.

When Claude 3.5 Sonnet outperformed GPT-4 on their benchmark tasks, CTO Marcus Webb swapped models in one week. The product got better. NPS jumped 12 points (illustrative). Their competitive advantages were untouched — because those advantages lived in Layer 4, not Layer 2.

Their lesson for you: Invest in the application layer. Make the model a swappable part. Build your moat from what competitors cannot buy off the shelf.


⚡

Diagnose the Strategy

25 XP
A retail company has spent $3M building a custom AI model trained on their product catalog to power search recommendations. A newer foundation model just launched that handles product search better out of the box.

1. Which company from the stories above does this retail company most resemble? (Meridian / Volta / Crestline)
2. What should they have spent that $3M on instead?

_Hint: Think about what the retail company actually owns that's unique. Is it the model, or is it the product catalog data and customer behavior patterns?_


💭You're Probably Wondering…

There Are No Dumb Questions

Q: "If foundation models keep getting better for free, why invest in AI at all? Can't we just wait?"

A: The models get better, yes — but they get better for everyone equally. If you and your competitor both use the same improved model, neither of you gains an edge. Your edge comes from what you build on top of the model — your data, your UX, your domain expertise. Waiting means your competitor accumulates that application-layer advantage while you stand still.

Q: "What's RAG? I keep hearing this term."

A: RAG stands for Retrieval-Augmented Generation. In plain English: instead of asking the AI to answer from its general training, you first retrieve relevant documents from your own private data, then feed those documents to the AI along with the question. The AI generates an answer grounded in your data. It's like giving a really smart intern your company's files before asking them a question, instead of just hoping they already know the answer.
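The retrieve-then-generate flow can be sketched in a few lines of Python. This is a toy illustration, not a production pattern: the keyword-overlap scoring, the document strings, and the prompt format are all hypothetical stand-ins — real RAG systems use embedding-based vector search and a live model API.

```python
# Toy RAG sketch: retrieve relevant private documents, then build a
# prompt that grounds the model's answer in them. The keyword-overlap
# scoring below is a stand-in; production systems use vector search.

def retrieve(question: str, documents: list[str], k: int = 2) -> list[str]:
    """Rank documents by words shared with the question; keep the top k."""
    q_words = set(question.lower().split())
    ranked = sorted(
        documents,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return ranked[:k]

def build_prompt(question: str, documents: list[str]) -> str:
    """Feed the retrieved documents to the model alongside the question."""
    context = "\n".join(f"- {d}" for d in retrieve(question, documents))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

# Hypothetical private data no foundation model was trained on:
docs = [
    "Contract 114: renewal date is March 2026, auto-renews unless cancelled.",
    "Office policy: plants are watered on Fridays.",
    "Contract 114: liability is capped at $50,000.",
]
print(build_prompt("What is the renewal date in contract 114?", docs))
```

The model never needed to "know" your contracts in advance — the application layer hands it the relevant files at query time, which is exactly why the data, not the model, is the moat.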

Q: "Is it ever okay to build at Layer 2 — training your own model?"

A: Sometimes, yes — but only if you have a genuinely unique dataset that gives the model capabilities no foundation model can match, and you have the team and budget to maintain it long-term. For most companies, the math doesn't work. Foundation models improve faster than your custom model can, and they cost you nothing to improve.


⚡

RAG or Not?

25 XP
Which of these scenarios would benefit from RAG (feeding private data to the AI at query time)?

1. A law firm wants the AI to answer questions about their client contracts
2. A student wants the AI to explain how photosynthesis works
3. A hospital wants the AI to summarize a specific patient's medical history

_Hint: RAG is most valuable when the AI needs information it was NOT trained on — your private, proprietary, or up-to-date data._


The Golden Rule: Make the Model a Config Variable

Here's the single most important architectural principle from this lesson, stated as simply as possible:

The AI model should be a setting you can change, not a foundation you build on.

Think of it like choosing a streaming service for your TV. Your TV (your application) works with Netflix, Hulu, Disney+, whatever. You can switch anytime. You didn't build a TV that only plays Netflix. That would be absurd.

Your AI application should work the same way. The model is the streaming service. Your application is the TV. Build a great TV.

When a better model launches next quarter (and one will), you change the config variable and move on. Your data, your UX, your customer relationships — the things that actually matter — stay exactly where they are.
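The principle fits in a short Python sketch. The provider functions here are hypothetical stand-ins for real SDK calls — the point is that application code only ever calls one neutral function, and a settings entry decides which model answers:

```python
# "Model as a config variable": application code calls one neutral
# function, and a settings entry decides which provider answers.
# The provider functions are hypothetical stand-ins for real SDK calls.

config = {"ai_provider": "anthropic"}  # switching models = editing this line

def call_anthropic(prompt: str) -> str:  # stand-in for a real Claude API call
    return f"[claude] {prompt}"

def call_openai(prompt: str) -> str:  # stand-in for a real GPT API call
    return f"[gpt] {prompt}"

PROVIDERS = {"anthropic": call_anthropic, "openai": call_openai}

def call_model(prompt: str) -> str:
    """The only entry point the rest of the application ever uses."""
    return PROVIDERS[config["ai_provider"]](prompt)

print(call_model("Summarize Q3 revenue drivers."))

# A better model launches? One-line config change, zero code rewrites:
config["ai_provider"] = "openai"
print(call_model("Summarize Q3 revenue drivers."))
```

Meridian's pattern was the opposite — the provider name hardcoded into every call site — so a deprecation meant touching the whole codebase instead of one line.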


⚡

Config Variable Check

25 XP
Your CTO shows you two code approaches for calling an AI model:

**Approach A:** The code says `call_openai_gpt4("analyze this data")` — it directly names the specific model in every function call throughout the codebase.

**Approach B:** The code says `call_model(config.ai_provider, "analyze this data")` — it reads which model to use from a settings file.

1. Which approach lets you switch models in a day vs. a month?
2. Which approach did Meridian Analytics use? Which did Crestline use?

_Hint: Imagine OpenAI raises their prices 10x tomorrow. Which approach lets you switch to a competitor by lunchtime?_


Try it

⚡

Challenge

50 XP
Acme Health is a digital health startup. They built their first AI feature — a symptom checker — so that it only works with GPT-4's exact response format, including specific field names that OpenAI invented. When OpenAI changed those field names in a routine API update, the symptom checker broke and took 3 days to fix.

1. Which layer of the AI value stack did Acme invest in? (Infrastructure / Foundation Models / Developer Platforms / Application Layer)
2. Where should Acme have invested instead to avoid this fragility?
3. Acme is now considering two options for their next feature:
   - Option A: Hardcode calls to Claude 3.5 Sonnet by exact model ID — meaning the code will only ever call that one specific model
   - Option B: Abstract the model behind a config variable — meaning the code calls "whichever model is set in config" — and test against both Claude and GPT-4o
   Which option builds the moat they actually want? Why?

_Hint: Start with question 1 — look at the comparison table and trace where Acme's actual code lives. The bug came from depending on a specific model's internal output format. Which layer does "depending on a specific model" put you in? And where should a startup's durable investment actually live?_

Back to Dana Park

Dana Park got the deprecation email and lost three months and $800,000 because Meridian Analytics had treated GPT-3 like a foundation rather than a swappable ingredient. Every function in their codebase named that specific model; every downstream system depended on its quirky output format. Crestline's CTO Marcus Webb changed one line in a config file because Crestline had built at Layer 4 — their competitive advantage lived in proprietary customer behavior data and domain expertise that no model swap could touch.

Dana spent the months after the crisis rebuilding with one rule: the model is a config variable, not a foundation. Meridian's next product launched model-agnostic, and when a better model emerged six months later, they were running it in a week. The AI landscape will keep moving — foundation models will be deprecated, outperformed, and repriced. The only durable bet is building where the landscape can't pull the rug out from under you.

Key takeaways

  • Build the restaurant, not the farm. Your competitive advantage lives in Layer 4 — your data, UX, and domain expertise — not in the model itself. That's the layer competitors cannot replicate by swiping a credit card.
  • Make the model a config variable. Design your architecture so you can switch AI providers in days, not months. The model is the ingredient; your application is the recipe.
  • Every day of proprietary data deepens your moat. Each user interaction, each piece of domain knowledge baked into your application layer, compounds your advantage in ways that simply choosing a better model never will.

?

Knowledge Check

1. A vendor claims their AI model achieves 'human-level reasoning.' What is the most important follow-up question before letting that claim influence a procurement decision?

2. What is the most consequential difference between a large language model and an AI agent, from a governance perspective?

3. Your board asks whether to standardize on OpenAI via Azure, build on Meta's open-source Llama models, or use Anthropic's API directly. What strategic factor should most heavily frame that decision?

4. A competitor announces a major new AI capability. What is the most useful initial assessment question?

Next

AI Strategy and Competitive Positioning