Open-Source LLM Stack
Run the open-source LLM stack
You want to stop paying per token and own your stack. Learn to choose, deploy, and fine-tune Llama, Mistral, and Qwen, so open-source LLMs become a real option for your team.
Overview
You want to stop paying per token and own your stack. Learn to choose, deploy, and fine-tune Llama, Mistral, and Qwen, so open-source LLMs become a real option for your team. Octo builds this course around your role, your experience, and what you already know, so the version you get isn't the same one a beginner across the hall is reading.
What you'll learn
By the end, you'll be able to do all of these, not just have read about them.
Choose between Llama, Mistral, Qwen, and friends for your use case
Deploy open models on your own hardware or cloud with predictable cost
Fine-tune efficiently with LoRA, QLoRA, and proper data curation
Compare open models against frontier models on the metrics you care about
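One of the fine-tuning skills above, LoRA, comes down to a simple piece of math: instead of updating a full weight matrix W, you train a low-rank pair B and A and merge the scaled product back in. A toy sketch of that merge in plain Python (illustrative only; real fine-tuning would use a library such as peft, and the matrices here are made up):

```python
# Minimal illustration of the LoRA merge: W' = W + (alpha / r) * (B @ A).
# Plain Python lists stand in for real tensors; this only shows the
# low-rank math, not an actual training loop.

def matmul(X, Y):
    """Multiply two matrices given as lists of rows."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)]
            for row in X]

def lora_merge(W, A, B, alpha):
    """Merge a rank-r adapter (B @ A) into the frozen weight W."""
    r = len(A)                      # A is r x d_in, B is d_out x r
    delta = matmul(B, A)
    scale = alpha / r
    return [[w + scale * d for w, d in zip(w_row, d_row)]
            for w_row, d_row in zip(W, delta)]

# Toy example: 2x2 frozen weight, rank-1 adapter.
W = [[1.0, 0.0], [0.0, 1.0]]
A = [[1.0, 2.0]]                    # 1 x 2
B = [[0.5], [0.25]]                 # 2 x 1
merged = lora_merge(W, A, B, alpha=1.0)
# merged == [[1.5, 1.0], [0.25, 1.5]]
```

QLoRA applies the same low-rank update on top of a quantized base model, which is why it fits on much smaller GPUs.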
Who this is for
You're an engineer or PM whose work now includes shipping AI features.
You're a curious operator who uses LLMs daily and wants the substance behind the surface.
You're an experienced ML or applied-AI practitioner adding a new specialty.
Prerequisites
Working familiarity with the basics of the topic, the kind of thing you'd pick up in a beginner course.
Some real-world reps, even if informal.
Suggested chapters
This is the typical chapter list. Your version is generated against your background and adapts as you go. It may compress, expand, or reorder these.
- 01 Foundations of the Open-Source LLM Stack: The mental model and shared vocabulary you'll lean on for the rest of the course.
- 02 Core building blocks: The handful of moves that show up everywhere, drilled until they feel obvious.
- 03 Working through real examples: Applied patterns on examples close to the kind of work you actually do.
- 04 Edge cases & failure modes: Where the simple version breaks, and how to recognize it before it bites you.
- 05 Putting it together: Combining what you've learned into something end-to-end and defensible.
- 06 Capstone: A small project tied to your real work that proves you can use the material, not just recall it.
Real-world projects
- 01 Apply the open-source LLM stack to a small problem from your actual work or studies.
- 02 Produce one written or built artifact you can put on your resume, portfolio, or in a review packet.
- 03 Run a self-graded capstone against an Octo-provided rubric.
Tools & concepts
Real tools and ideas covered. Octo brings them in when they fit your stack.
- LLM APIs
- Embeddings
- Vector databases
- Prompting patterns
- Evals
- Streaming
- Function calling
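Two of those concepts, embeddings and vector databases, fit together in one move: store embedding vectors, then retrieve the nearest one by cosine similarity. A toy sketch in plain Python (the documents and vectors here are invented; production systems such as FAISS or pgvector use approximate indexes over real embeddings):

```python
import math

# Toy illustration of what a vector database does: hold embeddings and
# return the closest document to a query vector by cosine similarity.

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Fake "embeddings" keyed by document text.
store = {
    "deploying llama on a gpu":  [0.9, 0.1, 0.0],
    "curating fine-tuning data": [0.1, 0.8, 0.3],
    "prompting patterns":        [0.0, 0.2, 0.9],
}

def search(query_vec):
    """Return the stored document most similar to the query vector."""
    return max(store, key=lambda doc: cosine(query_vec, store[doc]))

best = search([0.85, 0.15, 0.05])
# best == "deploying llama on a gpu"
```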
Where this leads
- 01 Applied AI / ML engineer roles
- 02 Stronger AI fluency in your current role
- 03 Foundation for advanced AI specialties
Common questions
Is this a fixed course, or is it built for me?
Built for you. The chapter list above is a typical outline. Your actual course is generated against your role, experience, and what you already know, then adapts as you go.
How long does it take?
Most learners finish in 2–6 weeks at a normal pace, depending on the topic. Octo compresses where you're strong and slows down where you're weak.
Is there a fixed schedule or cohort?
No. You start when you start. There's no live session, no calendar, no deadline.
Can I ask questions while I'm learning?
Yes. Every module has an AI Sidekick in the margin. Ask for a different example, push back, or get a clarifying analogy without leaving the page.
What do I get at the end?
A verifiable, HMAC-signed certificate with a public verify page. It records the modules passed, scores, and capstone, not just attendance.
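The exact certificate format isn't specified here, but HMAC signing in general works like this: the issuer keys a hash over the certificate payload, and a verifier holding the key recomputes and compares it, so any tampering changes the signature. A generic sketch with Python's standard library (the payload fields and secret are invented for illustration):

```python
import hashlib
import hmac

# Generic HMAC sign/verify illustration. The payload format and key are
# made up; they are not Octo's actual certificate scheme.

SECRET = b"issuer-side-secret"

def sign(payload: str) -> str:
    return hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()

def verify(payload: str, signature: str) -> bool:
    # compare_digest does a constant-time comparison to avoid timing leaks
    return hmac.compare_digest(sign(payload), signature)

cert = "modules=6;score=92;capstone=passed"
sig = sign(cert)
ok = verify(cert, sig)                              # True
tampered = verify(cert.replace("92", "99"), sig)    # False: payload changed
```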
How much does it cost?
Octo is in research preview, and courses are currently open. We'll be transparent before pricing changes.
More in LLMs & Foundation Models
- How LLMs Actually Work: You use LLMs every day and have only a hand-wavy sense of what they are. Learn tokens, attention, training, and alignment clearly, with the math kept to what you actually need.
- Transformers From Scratch: You can read a transformer paper but can't quite implement one. Build a small transformer end to end, so every line stops being a mystery and starts being a choice.
- RLHF & Model Alignment: How does a next-token predictor become a useful assistant? Learn RLHF, DPO, and the alignment pipeline, plus the failure modes that explain weird model behavior in the wild.
- Long Context & State: A million-token window sounds like magic until you put real data in it. Learn the context engineering and memory patterns that make long context work, plus the trade-offs nobody tells you about.