The single most impactful variable in prompt engineering is not the wording of your request — it's whether you include examples. The difference between zero-shot and few-shot prompting can turn a mediocre AI response into a precise, perfectly formatted output. This lesson explores both techniques in depth, with practical guidelines for when to use each.
Understanding Zero-Shot Prompting
In a zero-shot prompt, you describe the task with no examples. You rely entirely on the model's pre-trained knowledge to infer the format, style, conventions, and edge-case handling of your desired output.
Zero-shot prompting works because LLMs have been trained on vast amounts of text, including enormous quantities of instruction-following data. When you write "Write a function that sorts an array of numbers in ascending order", the model has seen countless similar requests during training and can produce a reasonable response without any additional guidance.
💡 Note
Zero-shot example: "Write a TypeScript function called `validateEmail` that takes a string parameter and returns a boolean indicating whether the string is a valid email address. Use a regex pattern." — The model must decide everything: the exact regex, error handling approach, whether to export the function, etc.
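To make those open decisions concrete, here is one plausible output for that zero-shot prompt. The regex below is deliberately simple and is only one of many patterns a model might choose — a different run might produce a stricter pattern, add an export, or handle errors differently:

```typescript
// One possible response to the zero-shot prompt above.
// The model chose a basic regex here; nothing in the prompt
// required this particular pattern or level of strictness.
function validateEmail(email: string): boolean {
  // Requires: some non-whitespace chars, "@", more chars, ".", more chars.
  const pattern = /^[^\s@]+@[^\s@]+\.[^\s@]+$/;
  return pattern.test(email);
}
```

Two zero-shot runs of this prompt might disagree on exactly which strings count as valid — and that variability is precisely the gap few-shot examples close.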
When Zero-Shot Works Well
Zero-shot prompting is your best choice in these situations:
- The task is common and well-represented in training data — sorting, filtering, CRUD operations, standard UI components.
- You genuinely don't care about the specific format — any reasonable implementation will do.
- Speed matters more than precision — you want a quick draft to iterate on.
- The task is simple enough that there's only one reasonable way to do it.
- You're exploring or brainstorming — you want to see what the model comes up with naturally.
When Zero-Shot Fails
Zero-shot prompting breaks down when:
- You need a specific output format (particular JSON schema, table layout, etc.).
- Your coding conventions differ from the mainstream (unusual naming, specific patterns).
- The task involves domain-specific knowledge the model may not have.
- Consistency across multiple outputs is critical (e.g., generating 10 API endpoints that all follow the same pattern).
- The task is ambiguous — the model could interpret it in multiple valid ways.
Understanding Few-Shot Prompting
Few-shot prompting includes one or more input/output examples before the actual task. These examples serve as a template that the model will mimic. The model doesn't "learn" from these examples in a training sense — it pattern-matches against them during generation.
This is arguably the most powerful general-purpose prompting technique. It's reliable, predictable, and works across virtually every type of task. Think of it as showing someone what you want, rather than trying to describe it in words.
The Anatomy of a Few-Shot Prompt
A well-structured few-shot prompt has three parts:
- Task description: A brief explanation of what you want (optional but helpful).
- Examples: 1–5 input → output pairs that demonstrate the exact format and behavior you expect.
- The actual input: Your real request, formatted identically to the example inputs.
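The three parts above can be assembled mechanically. A minimal sketch — the helper name, the slug task, and the example data are all illustrative, not from any particular library:

```typescript
interface Example {
  input: string;
  output: string;
}

// Assemble a few-shot prompt: task description, then examples,
// then the real input formatted identically to the example inputs.
function buildFewShotPrompt(
  task: string,
  examples: Example[],
  input: string
): string {
  const shots = examples
    .map((ex) => `Input: ${ex.input}\nOutput: ${ex.output}`)
    .join("\n\n");
  return `${task}\n\n${shots}\n\nInput: ${input}\nOutput:`;
}

const prompt = buildFewShotPrompt(
  "Convert each product name to a URL slug.",
  [
    { input: "Wireless Mouse (2nd Gen)", output: "wireless-mouse-2nd-gen" },
    { input: "USB-C Hub / 7 Port", output: "usb-c-hub-7-port" },
  ],
  "Mechanical Keyboard, Model K-5"
);
```

Note that the real input is wrapped in the exact same `Input: … Output:` frame as the examples — ending the prompt at `Output:` invites the model to complete the pattern.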
💡 Note
The examples should be realistic and representative. Toy examples (like processing a list with 2 items) may lead the model to produce toy-quality outputs. Use examples that reflect the actual complexity of your task.
One-Shot Prompting (1 example)
One-shot prompting uses a single example. It's the minimum viable few-shot approach. It works well when the task is relatively straightforward and you mainly need to establish the output format.
However, a single example can be ambiguous. The model doesn't always know which aspects of your example are intentional patterns and which are coincidental details. For example, if your one example uses camelCase, the model will likely follow suit — but it won't know whether that's a strict requirement or just how you happened to write that particular example.
Few-Shot Prompting (2–5 examples)
The sweet spot for most tasks is 2–3 examples. With multiple examples, the model can identify consistent patterns (these are intentional) and distinguish them from incidental details (these vary across examples).
Each additional example reinforces the pattern and reduces ambiguity. However, beyond 3–5 examples, you hit diminishing returns — and each example consumes precious context window tokens.
- 2 examples: Establishes a clear pattern. The model can distinguish intentional patterns from coincidences.
- 3 examples: The gold standard. Highly reliable for most tasks.
- 4–5 examples: Useful for complex or unusual formats where the model needs extra reinforcement.
- More than 5 examples: Rarely needed. Consider whether you're over-engineering the prompt.
Example Selection Strategy
The examples you choose matter enormously. Follow these guidelines:
- Cover the common case: At least one example should represent the most typical input you'll encounter.
- Include an edge case: Show the model how to handle unusual inputs (empty values, special characters, etc.).
- Place the best example last: Due to recency bias, the model pays most attention to the final example.
- Keep examples consistent: If your examples use different styles or formats, the model gets confused.
- Match the complexity: If your real inputs are complex, use complex examples. Simple examples for complex tasks lead to oversimplified outputs.
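The ordering guideline above can be encoded directly. A small sketch that puts the strongest example last to exploit recency bias — the `representativeness` score is hypothetical, something you would assign by judgment rather than compute:

```typescript
interface RankedExample {
  input: string;
  output: string;
  representativeness: number; // hypothetical 0-1 score you assign by hand
}

// Sort ascending by score so the best, most representative
// example lands in the final position of the prompt.
function orderForRecency(examples: RankedExample[]): RankedExample[] {
  return [...examples].sort(
    (a, b) => a.representativeness - b.representativeness
  );
}
```

Feeding the sorted array into a prompt builder then guarantees the model sees your best example immediately before the real input.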
Zero-Shot vs Few-Shot: Decision Framework
Use this decision tree when choosing your approach:
- Is the task common and well-defined? → Zero-shot first. If results are good, stop.
- Did zero-shot produce the wrong format? → Add 1 example (one-shot).
- Did one-shot produce inconsistent results? → Add 2 more examples (few-shot).
- Is the task highly unusual or domain-specific? → Start with 3 examples.
- Do you need machine-like consistency across many outputs? → Use 3–5 examples with strict formatting.
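The decision tree above can be summarized as a small helper. The flag names and the returned counts are illustrative encodings of the guidelines, not fixed rules:

```typescript
// Illustrative encoding of the decision tree above: given what you
// know about the task so far, suggest how many examples to include.
function suggestedExampleCount(task: {
  isCommon: boolean;
  zeroShotFormatWrong: boolean;
  oneShotInconsistent: boolean;
  isUnusualDomain: boolean;
  needsStrictConsistency: boolean;
}): number {
  if (task.needsStrictConsistency) return 3; // scale up to 5 if needed
  if (task.isUnusualDomain) return 3; // start with 3 examples
  if (task.oneShotInconsistent) return 3; // one-shot plus 2 more
  if (task.zeroShotFormatWrong) return 1; // escalate to one-shot
  return 0; // common, well-defined task: try zero-shot first
}
```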
Advanced: Chain-of-Thought + Few-Shot
You can combine few-shot prompting with chain-of-thought reasoning by including examples that show step-by-step reasoning. Instead of just showing input → output, show input → reasoning → output. This is particularly powerful for debugging, code review, and complex problem-solving tasks.
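A sketch of what one such example pair might look like inside a prompt. The debugging question is invented for illustration; the key point is the three-part input → reasoning → output frame:

```typescript
// A chain-of-thought few-shot example: the demonstration shows the
// reasoning step explicitly, not just the final answer.
const cotExample = [
  "Input: Why does `[1, 2, 10].sort()` return [1, 10, 2] in JavaScript?",
  "Reasoning: Array.prototype.sort with no comparator converts elements",
  'to strings and compares them lexicographically, so "10" sorts before "2".',
  "Output: Pass a numeric comparator: `[1, 2, 10].sort((a, b) => a - b)`.",
].join("\n");
```

Given a few pairs in this shape, the model will tend to reproduce the reasoning step for your real input too — which is where the accuracy gains on debugging and review tasks come from.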
💡 Note
This combination — few-shot examples with chain-of-thought reasoning — is consistently among the highest-performing prompting strategies across academic benchmarks and real-world applications.
Practical Exercise
Try this exercise to internalize the difference:
- Take a task you do frequently (e.g., writing API endpoint handlers).
- First, prompt the AI with zero-shot: just describe what you want.
- Note what's wrong or inconsistent with the output.
- Now write 2–3 examples of your ideal output format.
- Re-prompt with your examples included.
- Compare the results. The difference is usually dramatic.
Key Takeaways
- Zero-shot relies on the model's training; few-shot provides explicit patterns to follow.
- 2–3 examples is the sweet spot for most tasks.
- Place your best, most representative example last.
- Use zero-shot first; escalate to few-shot only when needed.
- Few-shot + chain-of-thought is the most powerful general prompting strategy.