The AI Landscape — What Today's AI Can and Can't Do
Understand where your company sits in the AI value chain and which layer is actually defensible.
The $2 Million Mistake
Dana Park, Meridian Analytics, and Crestline are fictional composites illustrating real API deprecation risks that have affected companies across the industry.
In January 2024, a CEO named Dana Park got an email that made her stomach drop.
OpenAI was killing GPT-3 (specifically the legacy GPT-3 base Completions API models). Done. Gone. Her entire product — the thing her company Meridian Analytics had spent two years building — was wired directly into that model like Christmas lights plugged into a single outlet. When OpenAI pulled the plug, the lights went dark.
Her engineering team spent the next three months — and roughly $800,000 in salary — just keeping the product alive. Not improving it. Not shipping new features. Just surviving.
Meanwhile, across town, a competitor called Crestline got the same email. Their CTO, Marcus Webb, read it, shrugged, changed one line in a config file, and switched to a different AI model in a week. Their product actually got better. Customer satisfaction jumped 12 points.
Same industry. Same disruption. Wildly different outcomes.
What did Crestline know that Meridian didn't? They understood where in the AI stack to place their bets. And that's exactly what you're about to learn.
The Restaurant Supply Chain (Your New Mental Model)
Imagine you're opening a restaurant. You need:
- A farm that grows the ingredients (lettuce, tomatoes, beef)
- A food distributor that packages and delivers those ingredients
- Kitchen tools — ovens, blenders, pans
- Your recipes, your brand, your relationship with diners
Now here's the thing: any restaurant can buy tomatoes from the same farm. Any restaurant can order the same oven from the same catalog. Those things are commodities — everyone has access to the same stuff, so they don't make you special.
What makes your restaurant special? Your secret sauce. Your grandmother's recipe. The fact that your regulars walk in and you already know their order. The experience you create. That's what people pay for, and that's what a competitor cannot just go buy.
The AI world works exactly the same way.
A quick timeline of how we got here:

- 2017: "Attention Is All You Need" (Vaswani et al., Google Brain/Google Research). The Transformer architecture, the foundation of every modern LLM.
- 2020: GPT-3 launches with 175B parameters. A step-change in LLM scale, with substantially more reliable long-form text generation. It opened the API era for AI product development.
- 2022: ChatGPT launches and reaches 100M users in roughly two months, among the fastest consumer product adoptions on record at the time.
- 2023: GPT-4, Claude, Gemini, LLaMA. Every major tech company commits; $90B+ invested (widely estimated, various analyst sources).
- 2024: Multimodal models that see images, write code, and reason. AI enters the enterprise mainstream.
- Next: Agents, AI systems that take multi-step actions autonomously. The next wave of business transformation.

Now let's meet each layer as if it were a character in a story.
Meet the Four Layers
Layer 1: The Farm (Infrastructure)
Who lives here: NVIDIA (makes the specialized chips), and the big cloud providers — AWS, Azure, Google Cloud.
What it does: Provides the raw computing power that AI needs to think. Thousands of specialized chips crunching numbers at incredible speed. Without this layer, nothing else exists.
Personality: Reliable, unglamorous, increasingly cheap. Think of it as electricity — absolutely essential, but you don't compete on who has the best power grid. You just plug in.
Can you build a moat here? Almost certainly not, unless you're literally NVIDIA. These services are becoming commodities. You can rent them from multiple providers, and prices keep falling.
Layer 2: The Ingredients (Foundation Models)
Who lives here: Anthropic (Claude), OpenAI (GPT), Google (Gemini), Meta (Llama).
What it does: These are the "brains" — massive AI models trained on enormous amounts of text and data. They can write, reason, analyze, and create. They're the ingredients your application uses.
Personality: Flashy, fast-improving, and constantly being replaced by something better. Today's best model is next quarter's second-best model.
Can you build a moat here? Dangerous territory. If you hitch your wagon to one specific model, you're betting that this model stays the best forever. (Spoiler: it won't.)
Layer 3: The Kitchen Tools (Developer Platforms)
Who lives here: LangChain, Vercel AI SDK, HuggingFace — tools that make it easier to connect your application to AI models.
What it does: Provides the plumbing. Makes it easier to send data to a model, get results back, and do useful things with those results.
Personality: Helpful, ever-changing, somewhat interchangeable. Like kitchen appliances — you want good ones, but you can swap a blender brand without reinventing your recipes.
Can you build a moat here? Not really. These tools are open-source or widely available. Your competitor uses the same ones.
Layer 4: The Restaurant (Your Application Layer)
Who lives here: You. Your prompts, your proprietary data, your user experience, your domain expertise, your customer relationships.
What it does: This is where the magic happens — where raw AI capabilities get shaped into something your customers specifically love. This includes:
- RAG (Retrieval-Augmented Generation) — feeding your private data into the AI at query time so it gives answers specific to your business
- Your UX — how the AI experience actually feels to your users
- Your proprietary data — years of customer behavior, industry knowledge, and domain expertise that no one else has
Personality: Unique, hard to copy, gets stronger over time. Like a restaurant's reputation — every happy customer makes the next one more likely.
Can you build a moat here? YES. This is the only layer where your investment compounds. A competitor can switch to the same AI model you use overnight. They cannot replicate your data, your user relationships, or your domain expertise overnight.
The Comparison Table: Where Should You Invest?
| | Layer 1: Infrastructure | Layer 2: Foundation Models | Layer 3: Developer Platforms | Layer 4: Application |
|---|---|---|---|---|
| Restaurant analogy | The farm | The ingredients | The kitchen tools | Your recipes + brand |
| Examples | NVIDIA, AWS, Azure | Claude, GPT, Gemini | LangChain, HuggingFace | Your data, UX, prompts |
| Defensibility | Very low | Low | Low | High |
| Commoditizing? | Fast | Fast | Moderate | Slow (compounds) |
| Switching cost | Low | Low (if designed right) | Low | High (your advantage) |
| Your strategy | Rent it | Use it, don't marry it | Pick good tools | Invest heavily here |
A note on adoption claims: OpenAI said in 2024 that 92% of Fortune 500 companies use its products. That figure is self-reported by OpenAI and has not been independently verified.
Three Companies, Three Strategies, One Winner
You've already met Meridian and Crestline. Now let's add a third company, Volta Intelligence, and see the full picture.
Meridian Analytics: "We Built on Quicksand"
Dana Park's team at Meridian built their product like a restaurant that only knows how to cook with one specific brand of tomato. Their code was tightly coupled to GPT-3 — meaning their software depended on that exact model's behavior, its specific output format, even the quirky way it named data fields.
When OpenAI deprecated GPT-3 in early 2024? Their entire kitchen collapsed. Three months of rebuilding. $800K burned. Six features behind their roadmap by Q2.
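Tight coupling like Meridian's can be sketched in a few lines. This is illustrative Python, not Meridian's actual code; the response shapes mimic real provider formats, but the function names are hypothetical:

```python
# Anti-pattern: application logic depends on one model's exact response shape.
def parse_gpt3_response(response: dict) -> str:
    # Breaks the moment the provider deprecates the API or renames a field.
    return response["choices"][0]["text"]

# Better: a thin adapter per provider normalizes everything to one internal
# shape, so downstream code never sees provider-specific field names.
def normalize(provider: str, response: dict) -> str:
    if provider == "openai-completions":
        return response["choices"][0]["text"]
    if provider == "anthropic-messages":
        return response["content"][0]["text"]
    raise ValueError(f"unknown provider: {provider}")

# Two different providers, one internal result: downstream code is unaffected.
legacy = {"choices": [{"text": "Revenue grew 12%."}]}
modern = {"content": [{"text": "Revenue grew 12%."}]}
assert normalize("openai-completions", legacy) == normalize("anthropic-messages", modern)
```

The adapter is a few hours of work up front; skipping it is what turns a deprecation email into a three-month rebuild.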
Their mistake: They invested in Layer 2 instead of Layer 4. They built on top of the ingredients instead of building great recipes.
Volta Intelligence: "We Grew Our Own Tomatoes"
(Volta Intelligence is a fictional illustrative example.)
Volta went the opposite direction. They said, "We'll grow our own tomatoes!" — meaning they trained their own AI model from scratch on internal data. The initial investment: $2 million (illustrative). Ongoing cost: $400K per quarter in compute and ML engineer salaries.
Then GPT-4o launched in May 2024 and outperformed Volta's custom model on every benchmark their customers actually cared about. Suddenly Volta was the restaurant growing expensive organic tomatoes while the grocery store next door started selling ones that tasted better for a fraction of the price.
Their mistake: They invested in Layer 2 (trying to own the ingredients) when they should have invested in Layer 4 (the recipes and dining experience).
Crestline: "We Perfected the Recipes"
Crestline was the restaurant that said: "We don't care which farm our tomatoes come from. We care about our recipes, our regulars, and our reputation."
They built on top of the API (a standardized connection to the AI model — think of it as a universal power adapter) with a proprietary data layer — five years of user behavior signals and deep domain knowledge baked into their retrieval and ranking systems.
When Claude 3.5 Sonnet outperformed GPT-4 on their benchmark tasks, CTO Marcus Webb swapped models in one week. The product got better. NPS jumped 12 points (illustrative). Their competitive advantages were untouched — because those advantages lived in Layer 4, not Layer 2.
Their lesson for you: Invest in the application layer. Make the model a swappable part. Build your moat from what competitors cannot buy off the shelf.
There Are No Dumb Questions
Q: "If foundation models keep getting better for free, why invest in AI at all? Can't we just wait?"
A: The models get better, yes — but they get better for everyone equally. If you and your competitor both use the same improved model, neither of you gains an edge. Your edge comes from what you build on top of the model — your data, your UX, your domain expertise. Waiting means your competitor accumulates that application-layer advantage while you stand still.
Q: "What's RAG? I keep hearing this term."
A: RAG stands for Retrieval-Augmented Generation. In plain English: instead of asking the AI to answer from its general training, you first retrieve relevant documents from your own private data, then feed those documents to the AI along with the question. The AI generates an answer grounded in your data. It's like giving a really smart intern your company's files before asking them a question, instead of just hoping they already know the answer.
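The mechanics can be sketched in a toy example. Real systems retrieve with vector embeddings and a vector database; this sketch uses simple word overlap, and all names and documents are made up:

```python
import re

# Toy knowledge base: in practice this is your proprietary data.
DOCS = [
    "Refund policy: customers may return items within 30 days.",
    "Shipping: standard delivery takes 5 to 7 business days.",
    "Warranty: hardware is covered for one year from purchase.",
]

def tokens(text: str) -> set[str]:
    # Lowercase word tokens, punctuation stripped.
    return set(re.findall(r"[a-z]+", text.lower()))

def retrieve(question: str, docs: list[str], k: int = 1) -> list[str]:
    # Rank documents by how many words they share with the question.
    # (Production systems rank by embedding similarity instead.)
    q = tokens(question)
    return sorted(docs, key=lambda d: len(q & tokens(d)), reverse=True)[:k]

def build_prompt(question: str, docs: list[str]) -> str:
    # The retrieved context grounds the model's answer in *your* data.
    context = "\n".join(retrieve(question, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

prompt = build_prompt("What is the refund policy?", DOCS)
# The prompt now contains the refund document, ready to send to any model.
```

Notice that the model only enters at the last step. The retrieval layer, and the data behind it, is yours; that is why RAG lives in Layer 4.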
Q: "Is it ever okay to build at Layer 2 — training your own model?"
A: Sometimes, yes — but only if you have a genuinely unique dataset that gives the model capabilities no foundation model can match, and you have the team and budget to maintain it long-term. For most companies, the math doesn't work. Foundation models improve faster than your custom model can, and they cost you nothing to improve.
The Golden Rule: Make the Model a Config Variable
Here's the single most important architectural principle from this lesson, stated as simply as possible:
The AI model should be a setting you can change, not a foundation you build on.
Think of it like choosing a streaming service for your TV. Your TV (your application) works with Netflix, Hulu, Disney+, whatever. You can switch anytime. You didn't build a TV that only plays Netflix. That would be absurd.
Your AI application should work the same way. The model is the streaming service. Your application is the TV. Build a great TV.
When a better model launches next quarter (and one will), you change the config variable and move on. Your data, your UX, your customer relationships — the things that actually matter — stay exactly where they are.
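In code, the principle is small. Here is a minimal sketch, with made-up provider names and stubbed-out functions standing in for real vendor SDK calls:

```python
# The model is a setting, not a foundation. Swapping providers means
# changing one config value; application code never names a specific model.

CONFIG = {"model_provider": "provider_a"}  # the "one line in a config file"

def call_provider_a(prompt: str) -> str:
    # Stand-in for a real API call to provider A's SDK.
    return f"[provider A] {prompt}"

def call_provider_b(prompt: str) -> str:
    # Stand-in for a real API call to provider B's SDK.
    return f"[provider B] {prompt}"

PROVIDERS = {"provider_a": call_provider_a, "provider_b": call_provider_b}

def ask(prompt: str) -> str:
    # Application code looks the provider up at call time,
    # so a swap requires zero changes here.
    return PROVIDERS[CONFIG["model_provider"]](prompt)

answer = ask("Summarize Q3 revenue.")
CONFIG["model_provider"] = "provider_b"  # a better model launched; switch
better_answer = ask("Summarize Q3 revenue.")
```

In a real system the config value lives in an environment variable or config file, and each provider function wraps that vendor's SDK behind the same internal interface.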
Back to Dana Park
Dana Park got the deprecation email and lost three months and $800,000 because Meridian Analytics had treated GPT-3 like a foundation rather than a swappable ingredient. Every function in their codebase named that specific model; every downstream system depended on its quirky output format.

Crestline's CTO Marcus Webb changed one line in a config file because Crestline had built at Layer 4 — their competitive advantage lived in proprietary customer behavior data and domain expertise that no model swap could touch.

Dana spent the months after the crisis rebuilding with one rule: the model is a config variable, not a foundation. Meridian's next product launched model-agnostic, and when a better model emerged six months later, they were running it in a week. The AI landscape will keep moving — foundation models will be deprecated, outperformed, and repriced. The only durable bet is building where the landscape can't pull the rug out from under you.
Key takeaways
- Build the restaurant, not the farm. Your competitive advantage lives in Layer 4 — your data, UX, and domain expertise — not in the model itself. That's the layer competitors cannot replicate by swiping a credit card.
- Make the model a config variable. Design your architecture so you can switch AI providers in days, not months. The model is the ingredient; your application is the recipe.
- Every day of proprietary data deepens your moat. Each user interaction, each piece of domain knowledge baked into your application layer, compounds your advantage in ways that simply choosing a better model never will.
Knowledge Check
1. A vendor claims their AI model achieves 'human-level reasoning.' What is the most important follow-up question before letting that claim influence a procurement decision?
2. What is the most consequential difference between a large language model and an AI agent, from a governance perspective?
3. Your board asks whether to standardize on OpenAI via Azure, build on Meta's open-source Llama models, or use Anthropic's API directly. What strategic factor should most heavily frame that decision?
4. A competitor announces a major new AI capability. What is the most useful initial assessment question?