How LLMs Actually Work
You use LLMs every day and have only a hand-wavy sense of what they are. Learn tokens, attention, training, and alignment clearly, with the math kept to what you actually need.
Transformers From Scratch
You can read a transformer paper, but you can't quite implement one. Build a small transformer end to end, so every line stops being a mystery and starts being a choice.
RLHF & Model Alignment
How does a next-token predictor become a useful assistant? Learn RLHF, DPO, and the alignment pipeline, plus the failure modes that explain weird model behavior in the wild.
Open-Source LLM Stack
You want to stop paying per token and own your stack. Learn to choose, deploy, and fine-tune Llama, Mistral, and Qwen, so open-source LLMs become a real option for your team.
Long Context & State
A million-token window sounds like magic until you put real data in it. Learn the context engineering and memory patterns that make long context work, plus the trade-offs nobody tells you about.