
Mastering ChatGPT
Module 2 · ~15 min

Prompt Engineering Fundamentals

The art and science of writing prompts that get AI to do exactly what you want. Master the techniques used by professionals to get 10x better results from any AI tool.

Two marketers, one AI, completely different results

Maria needs a social media strategy for her company's new product launch. She opens ChatGPT and types:

"Give me a social media strategy."

She gets a generic list of platitudes: "Post consistently," "Engage with your audience," "Use hashtags." Useless. She closes the tab and spends three hours writing the plan herself.

Her colleague Devon opens the same tool and types:

"You are a senior social media strategist specializing in B2B SaaS product launches. I'm launching a project management tool aimed at remote teams of 10-50 people. Our budget is $5,000/month. Our audience is primarily on LinkedIn and Twitter. We have no existing following. Create a 30-day launch strategy with specific post types, posting frequency, and one viral-worthy content idea per week. Format as a table with columns: Week, Platform, Post Type, Topic, Goal."

Devon gets a detailed, actionable 30-day plan in 15 seconds. He tweaks two rows and ships it to his VP.

Same AI. Same task. The difference wasn't intelligence or experience -- it was prompt engineering.

🔑What is prompt engineering?
Prompt engineering is the skill of writing instructions that get AI to produce exactly the output you need. It's the difference between getting a C- first draft and an A- final product. And unlike most technical skills, you can learn the fundamentals in 15 minutes.

Zero-shot vs. few-shot prompting

These are the two most fundamental techniques in prompt engineering, and the names are simpler than they sound.

Zero-shot means you give the AI a task with no examples. You're trusting it to figure out the format, tone, and structure on its own.

"Classify this customer review as positive, negative, or neutral: 'The delivery was late but the product quality was amazing.'"

Few-shot means you show the AI a few examples first, then give it the task. You're saying: "Here's the pattern -- now follow it."

"Classify these customer reviews:

Review: 'Absolutely love it, best purchase ever!' → Positive
Review: 'Broke after two days, total waste of money.' → Negative
Review: 'It's okay, nothing special.' → Neutral

Now classify: 'The delivery was late but the product quality was amazing.' →"

Zero-shot

  • Faster to write
  • AI guesses the format
  • Works for simple tasks

Few-shot

  • Takes 30 more seconds
  • AI follows YOUR format exactly
  • Works for complex or nuanced tasks

The rule of thumb: If you care about the exact format, tone, or classification criteria of the output, use few-shot. If you just need a quick answer, zero-shot is fine.
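When you reuse the same few-shot pattern often, it helps to generate the prompt from your labeled examples instead of retyping them. A minimal sketch (the `few_shot_prompt` helper is mine, not a library function; the reviews are the ones from the example above):

```python
def few_shot_prompt(examples, query, task="Classify these customer reviews:"):
    """Build a few-shot prompt: task line, labeled examples, then the real query."""
    lines = [task, ""]
    for text, label in examples:
        lines.append(f"Review: '{text}' -> {label}")
    lines.append("")
    lines.append(f"Now classify: '{query}' ->")
    return "\n".join(lines)

examples = [
    ("Absolutely love it, best purchase ever!", "Positive"),
    ("Broke after two days, total waste of money.", "Negative"),
    ("It's okay, nothing special.", "Neutral"),
]
prompt = few_shot_prompt(
    examples,
    "The delivery was late but the product quality was amazing.",
)
```

Paste the resulting string into any chat tool, or send it through an API call; the examples travel with the query, so the model sees the pattern every time.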

There Are No Dumb Questions

How many examples do I need for few-shot prompting?

Two to three is the sweet spot. One example shows the pattern. Two confirm it. Three lock it in. More than five usually wastes tokens without improving quality.

Does few-shot prompting work with every AI tool?

Yes. ChatGPT, Claude, Gemini, Copilot -- they all respond to examples. It's the most universal technique in prompt engineering because it works with how language models process patterns.

System prompts and role assignment

Every great prompt starts with telling the AI who to be. This is called role assignment, and it's the single highest-leverage technique in prompt engineering.

Why? Because the role shapes everything else. A financial analyst writes differently from a kindergarten teacher. A legal advisor focuses on different details than a marketing copywriter. When you assign a role, you're activating a specific subset of the AI's training data.

| Role you assign | How the AI changes |
| --- | --- |
| "You are a senior data scientist" | Uses technical language, focuses on methodology, cites statistical concepts |
| "You are a patient kindergarten teacher" | Uses simple analogies, short sentences, encouraging tone |
| "You are a cynical startup founder" | Direct, opinionated, cuts through fluff, focuses on business viability |
| "You are an executive coach" | Asks probing questions, reframes problems, focuses on growth |

Think of it like a costume. The AI puts on whatever role you give it, and that costume changes how it talks, what it emphasizes, and what it skips.

**Step 1: Assign a role** -- "You are a senior product manager at a tech company"
**Step 2: Set the context** -- "We're preparing for a board meeting next week"
**Step 3: Give the task** -- "Write a product update that highlights our top 3 wins this quarter"
**Step 4: Specify the format** -- "Use bullet points, include metrics, keep it under 200 words"
⚠️A common role assignment mistake
Don't just say "be an expert." That's too vague. Say what KIND of expert: "You are a tax accountant specializing in small business deductions in the United States." The more specific the role, the more targeted the output.
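The four-step structure (role, context, task, format) can be sketched as a small assembler so every prompt you send follows the same order. A hypothetical helper, assuming nothing beyond string formatting:

```python
def build_prompt(role, context, task, output_format):
    """Assemble a prompt in the role -> context -> task -> format order."""
    return (
        f"You are {role}. "
        f"{context}. "
        f"{task}. "
        f"{output_format}."
    )

prompt = build_prompt(
    role="a senior product manager at a tech company",
    context="We're preparing for a board meeting next week",
    task="Write a product update that highlights our top 3 wins this quarter",
    output_format="Use bullet points, include metrics, keep it under 200 words",
)
```

The payoff is consistency: once the order is encoded, you can swap the role or the format without forgetting the other three pieces.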

Chain-of-thought prompting

Here's a technique that dramatically improves accuracy on any task requiring reasoning: chain-of-thought prompting. Instead of asking for a final answer, you ask the AI to think through the problem step by step.

The difference is dramatic:

Without chain-of-thought:

"A store has 45 apples. They sell 60% on Monday and half of what's left on Tuesday. How many remain?" AI answer: "9 apples" (sometimes wrong, sometimes right -- it's guessing)

With chain-of-thought:

"A store has 45 apples. They sell 60% on Monday and half of what's left on Tuesday. How many remain? Think step by step." AI: "Step 1: 60% of 45 = 27 sold on Monday. Step 2: 45 - 27 = 18 remaining. Step 3: Half of 18 = 9 sold on Tuesday. Step 4: 18 - 9 = 9 remaining. Answer: 9 apples."

The answer might be the same, but the chain-of-thought version is reliably correct because the model works through each step instead of jumping to a conclusion. For complex reasoning, this is the single biggest accuracy improvement you can make.

When to use it:

  • Math and logic problems
  • Multi-step analysis ("Compare these 3 options and recommend one")
  • Debugging ("Here's my code -- find the bug")
  • Complex writing tasks ("Outline, then draft, then refine")
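In code, chain-of-thought is often just a suffix appended to the question. A sketch that wraps the apple problem from above and double-checks the arithmetic the model should walk through (the `with_chain_of_thought` helper is illustrative, not a library API):

```python
def with_chain_of_thought(prompt: str) -> str:
    """Append the instruction that triggers step-by-step reasoning."""
    return prompt.rstrip() + " Think step by step."

question = (
    "A store has 45 apples. They sell 60% on Monday and half of "
    "what's left on Tuesday. How many remain?"
)
cot_prompt = with_chain_of_thought(question)

# Sanity-check the steps the model should produce (integer math, no rounding):
sold_monday = 45 * 60 // 100   # 27 sold on Monday
remaining = 45 - sold_monday   # 18 left after Monday
remaining -= remaining // 2    # 9 left after half are sold on Tuesday
```

Having the ground-truth calculation next to the prompt is also a cheap way to spot-check whether the model's reasoning chain actually arrives at the right number.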

⚡

Zero-shot vs. Few-shot vs. Chain-of-thought

25 XP
For each task below, identify which technique you'd use and why:

1. Categorizing 50 support tickets into "Billing," "Technical," or "Feature Request" with consistent labels
2. Asking ChatGPT to explain what blockchain is
3. Figuring out which of three marketing campaigns had the best ROI given a table of data

Write your answers and reasoning. Think about: does the task need examples (few-shot)? Does it need step-by-step reasoning (chain-of-thought)? Or is it simple enough for a direct ask (zero-shot)?

_Hint: For each task, ask two questions: does the output format need to match a specific pattern? Does the task require step-by-step reasoning to get the right answer?_

Output format control

One of the most underused prompt engineering techniques: telling the AI exactly what shape the output should take.

Most people ask for information and let the AI decide the format. Pros specify it. Here's why this matters:

| You say... | You get... |
| --- | --- |
| "Tell me about our competitors" | A 500-word essay you have to parse |
| "List our top 5 competitors in a table with columns: Name, Strength, Weakness, Threat Level (1-5)" | A scannable table you can paste into a slide deck |

You can request virtually any format:

  • JSON: "Return the data as a JSON object with keys: name, email, role"
  • Markdown table: "Format as a markdown table"
  • Bullet points: "Use bullet points, max 10 words per bullet"
  • Numbered steps: "Give me numbered steps I can follow"
  • Specific structure: "Use this structure: Problem, Cause, Solution, Timeline"

Pro tip for developers: When you need structured data, ask for JSON with a specific schema:

"Extract the following from this customer email and return as JSON:

{"sentiment": "positive|negative|neutral", "urgency": "low|medium|high", "category": "billing|technical|general", "summary": "one sentence"}"

This turns ChatGPT into a data extraction pipeline you can plug directly into code.
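If you do plug model output into code, validate the JSON before trusting it; models occasionally return extra keys or values outside your schema. A minimal sketch, assuming the schema from the prompt above (the `reply` string is a stand-in for a real model response, not actual API output):

```python
import json

EXPECTED_KEYS = {"sentiment", "urgency", "category", "summary"}
ALLOWED = {
    "sentiment": {"positive", "negative", "neutral"},
    "urgency": {"low", "medium", "high"},
    "category": {"billing", "technical", "general"},
}

def parse_extraction(model_output: str) -> dict:
    """Parse the model's JSON reply and reject anything outside the schema."""
    data = json.loads(model_output)
    if set(data) != EXPECTED_KEYS:
        raise ValueError(f"unexpected keys: {set(data)}")
    for field, allowed in ALLOWED.items():
        if data[field] not in allowed:
            raise ValueError(f"bad value for {field}: {data[field]!r}")
    return data

# Stand-in for a real model reply:
reply = (
    '{"sentiment": "negative", "urgency": "high", '
    '"category": "billing", "summary": "Customer was double-charged."}'
)
result = parse_extraction(reply)
```

Failing loudly on a malformed reply is usually better than letting a bad label flow silently into your pipeline; you can retry the prompt when validation fails.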

There Are No Dumb Questions

What about temperature and creativity settings?

Temperature controls how "creative" vs. "predictable" the AI's output is. Low temperature (0-0.3) gives you consistent, factual responses -- great for data extraction, classification, and code. High temperature (0.7-1.0) gives you more varied, creative responses -- great for brainstorming, creative writing, and generating diverse ideas. Most AI tools default to around 0.7. If you're using the API, you can set this directly. In ChatGPT's interface, you can't change temperature, but you can simulate it with instructions like "give me the most likely answer" (low temp) or "be creative and unconventional" (high temp).

Can I ask for multiple formats in one prompt?

Absolutely. "Give me a summary paragraph, then a bullet list of action items, then a table comparing options." The AI handles multi-format output well as long as you're explicit about where each section starts.

Prompt chaining: breaking big tasks into steps

Here's what separates casual users from prompt engineers: prompt chaining. Instead of cramming everything into one massive prompt, you break the task into a sequence of smaller prompts where each step feeds into the next.

Think of it like cooking a complex meal. You don't throw all the ingredients into one pot and hope. You prep ingredients, make the sauce, cook the protein, and assemble. Each step builds on the previous one.

Why chaining works better than one giant prompt:

  1. Better quality -- the AI focuses on one thing at a time
  2. More control -- you can course-correct between steps
  3. Less confusion -- complex prompts with 10 instructions often get partially ignored
  4. Reusable steps -- you can swap in different Step 1s and keep the rest of the chain
🔑The chain-of-prompts principle
If your prompt has more than 3 distinct instructions, consider breaking it into a chain. The AI handles "do one thing well" better than "do five things at once." Each link in the chain should have a single, clear objective.
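A chain is just a loop where each step's output is spliced into the next step's prompt. A sketch under stated assumptions: `call_model` is whatever function sends a prompt to your AI tool (here stubbed out so the example runs offline), and the `{previous}` placeholder marks where the prior output goes:

```python
def run_chain(steps, call_model):
    """Run prompts in sequence, feeding each output into the next prompt."""
    previous = ""
    for step in steps:
        previous = call_model(step.format(previous=previous))
    return previous

# Stub model so the sketch runs without an API; swap in a real call.
def fake_model(prompt: str) -> str:
    return f"[output for: {prompt[:30]}...]"

steps = [
    "List the 5 biggest risks of launching a mobile app.",
    "Using these risks, write a mitigation plan: {previous}",
    "Summarize the plan in 3 bullets: {previous}",
]
final = run_chain(steps, fake_model)
```

Because each link is a plain string, you can inspect or edit the intermediate output between steps, which is exactly the course-correction advantage chaining gives you over one giant prompt.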

Common prompt patterns (your copy-paste toolkit)

Here are the seven patterns that cover 90% of real-world prompt engineering use cases. Bookmark these.

1. Summarize

"Summarize [this text] in [number] bullet points. Each bullet should be one sentence. Focus on [specific aspect]. The reader is [audience]."

2. Extract

"From the following text, extract: [field 1], [field 2], [field 3]. Return as [format]. If a field isn't mentioned, write 'N/A'."

3. Classify

"Classify the following [items] into these categories: [cat 1], [cat 2], [cat 3]. For each, provide the category and a confidence level (high/medium/low). Here are examples: [examples]."

4. Generate

"Generate [number] [type of content] for [audience]. Tone: [tone]. Constraints: [constraints]. Here's an example of what I'm looking for: [example]."

5. Transform

"Rewrite [this content] for [new audience/format/tone]. Keep the core message but change [specific aspects]. Length: [target length]."

6. Analyze

"Analyze [this data/text/situation]. Identify: [what to look for]. Format your analysis as [format]. Prioritize [criteria]."

7. Compare

"Compare [A] and [B] across these dimensions: [dim 1], [dim 2], [dim 3]. Format as a table. End with a recommendation for [use case]."
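The bracketed slots in these patterns map directly onto format strings, so you can keep them in code and fill them per task. A sketch filling the Compare pattern (the field names and the Notion/Confluence example are mine, purely illustrative):

```python
COMPARE = (
    "Compare {a} and {b} across these dimensions: {d1}, {d2}, {d3}. "
    "Format as a table. End with a recommendation for {use_case}."
)

prompt = COMPARE.format(
    a="Notion", b="Confluence",
    d1="price", d2="ease of use", d3="integrations",
    use_case="a 20-person remote startup",
)
```

Keeping the seven patterns as named constants turns your "copy-paste toolkit" into a reusable module instead of a scratchpad of snippets.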

⚡

Build a prompt chain

50 XP
Pick a real task you need to do this week. Break it into a 3-step prompt chain where each step feeds into the next. For example, if your task is "write a project proposal":

- **Step 1:** "List the 5 biggest risks of [project] and rate each from 1-5 on likelihood and impact"
- **Step 2:** "Using these risks, write a risk mitigation section for a project proposal. Address the top 3 risks with specific countermeasures."
- **Step 3:** "Now combine this risk section with the following project overview [paste overview] into a complete 1-page proposal. Format: Executive Summary, Scope, Risks, Timeline, Budget."

Write your 3-step chain for YOUR task. Bonus: actually run it and compare the result to what you'd get from a single prompt.

Anti-patterns: what NOT to do

Knowing what to avoid is just as important as knowing what to do. Here are the most common prompt engineering mistakes.

Anti-pattern 1: The kitchen sink prompt

"Write a blog post about AI in healthcare that's SEO-optimized and includes statistics and is written for executives but also accessible to general readers and should be 1000 words but could be longer if needed and include a call to action and make it engaging but professional and use examples from real companies but don't name any specific companies..."

This prompt tries to do everything and achieves nothing well. The AI will ignore half of your constraints. Fix: Break it into a chain or prioritize your top 3 constraints.

Anti-pattern 2: Being polite instead of precise

"Could you maybe try to possibly write something along the lines of a marketing email, if it's not too much trouble? Something kind of professional but also friendly?"

Hedging language ("maybe," "kind of," "along the lines of") makes the AI hedge its output too. Fix: Be direct. "Write a marketing email. Tone: professional but warm."

Anti-pattern 3: Contradictory instructions

"Write a short, detailed, comprehensive summary."

Short and comprehensive are opposites. The AI picks one and ignores the other. Fix: Decide what matters most. "Write a 100-word summary covering only the three main findings."

Anti-pattern 4: No constraints at all

"Tell me about machine learning."

This produces a generic overview that could be an intro to a textbook. Without knowing WHO you are, WHY you're asking, and HOW MUCH detail you need, the AI defaults to "explain it like a Wikipedia article." Fix: Add audience, purpose, and length.

⚠️The biggest anti-pattern of all
Accepting the first output without iterating. Most people treat the AI's first response as the final answer. The difference between mediocre and excellent AI output is almost always 2-3 rounds of refinement: "Make it shorter," "Add an example to point 2," "The tone is too formal -- make it conversational."

Real-world prompt templates

Here are five battle-tested templates you can copy and customize right now.

Email writer:

"You are a [role]. Write an email to [recipient + their role] about [topic]. Goal: [what you want them to do after reading]. Tone: [specify]. Length: under [X] words. Include: [specific elements]. Avoid: [things to skip]."

Meeting prep:

"I have a meeting with [who] about [topic] in [timeframe]. My goal is to [outcome]. Prepare: (1) an agenda with 3-4 discussion points, (2) two questions I should ask them, (3) two objections they might raise and how to respond. Keep each section under 50 words."

Content creator:

"Write a [content type] about [topic] for [audience]. The reader's main problem is [pain point]. Start with a hook that addresses this pain point. Include [number] specific examples. End with [CTA type]. Tone: [specify]. Length: [specify]."

Data analyzer:

"Here is [data type]: [paste data]. Analyze it and tell me: (1) the top 3 insights, (2) any anomalies or red flags, (3) what I should do next based on this data. Present insights as bullet points with the metric, the finding, and why it matters."

Decision maker:

"I need to decide between [option A] and [option B]. Context: [situation]. My priorities are: [list 3-4 priorities in order]. Compare both options across these priorities in a table. Then give me your recommendation with reasoning. Think step by step."

⚡

Spot the technique

25 XP
Read each prompt below and identify which prompt engineering technique(s) it uses. Choose from: zero-shot, few-shot, chain-of-thought, role assignment, output format control, prompt chaining.

1. "You are a data analyst. Here's a CSV of sales data. What's the trend?" → ___
2. "Positive: 'Love this product!' / Negative: 'Terrible experience.' / Now classify: 'It works but shipping was slow.'" → ___
3. "First, list the pros and cons. Then weigh each one on a scale of 1-10. Finally, give me a recommendation based on the total scores." → ___
4. "Calculate the total cost. Show your reasoning step by step before giving the final number." → ___

_Hint: For each prompt, look for the signals: does it define a persona? Does it show examples before the real task? Does it break work into sequential steps? Does it ask for reasoning before the final answer?_

Back to Maria and Devon

One week after Maria learned the fundamentals, her outputs started matching Devon's -- not because she worked harder, but because her instructions got sharper. She stopped opening ChatGPT with a vague hope and started opening it with a role, a context, a task, and a format in mind. The 30-day social media strategy that once took her three hours to write by hand now arrives as a first draft in 15 seconds, which she refines through two rounds of iteration. Devon has no special advantage -- the gap closed the moment the prompts did. The tool didn't change. The instructions did.

Key takeaways

  • Zero-shot is quick and works for simple tasks. Few-shot (adding examples) is more reliable for anything requiring a specific format or classification.
  • Role assignment is the highest-leverage single technique. "You are a [specific expert]" changes everything about the output.
  • Chain-of-thought ("think step by step") dramatically improves reasoning accuracy by forcing the model to work through each step before reaching a conclusion.
  • Output format control (JSON, tables, bullet points) turns AI from a chatbot into a productivity tool.
  • Prompt chaining beats giant prompts. Break big tasks into 3-5 linked steps.
  • Anti-patterns to avoid: kitchen sink prompts, hedging language, contradictory instructions, and accepting first-draft output.

Knowledge Check

1. You need to classify 200 customer support tickets into exactly 4 categories with consistent labeling. Which prompting technique is most important to use?

2. What is the main benefit of chain-of-thought prompting?

3. Which of these is a prompt engineering anti-pattern?

4. You want the AI to produce consistent, predictable output for a data extraction task. Which combination of techniques should you use?
