Article · 20 mins

Common Prompting Mistakes

Even experienced developers consistently make prompting mistakes that lead to poor AI output. The frustrating part is that these mistakes are easy to fix once you know what to look for. This lesson catalogs the most common pitfalls, explains why they happen, and gives you concrete strategies to avoid each one.

Mistake 1: Being Too Vague

This is the number one prompting mistake. Prompts like "fix this code", "make it better", or "add error handling" give the model no clear direction about what specifically is broken, what "better" means, or what kind of errors to handle.

Vague prompts produce vague outputs because the model has to guess your intent. Often it guesses wrong, which leads to frustrating back-and-forth that wastes more time than writing a specific prompt would have taken.

  • Bad: "Fix this code." → Good: "This function throws a TypeError when the input array is empty. Add a guard clause that returns an empty array for null/undefined/empty inputs."
  • Bad: "Make it better." → Good: "Refactor this function to use early returns instead of nested if-else blocks. Keep the same public API and return types."
  • Bad: "Add error handling." → Good: "Wrap the API call in a try-catch block. On network errors, retry up to 3 times with exponential backoff. On 4xx errors, throw a descriptive error. Log all errors to console."
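To see why specificity pays off, here is a sketch of the change the first "good" prompt above would produce. The function name and body are illustrative assumptions, not code from the lesson:

```typescript
// A guard clause that returns an empty array for null/undefined/empty
// input instead of letting a TypeError surface further down.
function doubleAll(values: number[] | null | undefined): number[] {
  if (!values || values.length === 0) return [];
  return values.map((v) => v * 2);
}
```

The vague prompt "fix this code" could plausibly produce any of a dozen changes; the specific prompt pins the model to exactly this one.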

💡 Note

A good test: could someone else read your prompt and know exactly what you want? If not, the AI can't know either.

Mistake 2: Overloading a Single Prompt

Asking the model to "add authentication, set up the database, write tests, and deploy" in one prompt is like asking a junior developer to do four tasks simultaneously. The result is superficial coverage of each task rather than deep, correct implementation of any single one.

LLMs work best with focused, single-purpose prompts. Each prompt should have one clear objective. If you need multiple things done, chain prompts sequentially — the output of one becomes context for the next.

Instead of one mega-prompt, break the work into steps:

  • Prompt 1: "Set up the database schema for a user authentication system with these fields: ..."
  • Prompt 2: "Now create the authentication API endpoints using the schema from above: signup, login, logout."
  • Prompt 3: "Write unit tests for the login endpoint covering: valid login, wrong password, and non-existent user."

Each step builds on the previous one with full context and focused attention.
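The chaining pattern above can be sketched as a loop in which each reply becomes context for the next prompt. The `LLMCall` type is a stand-in for whatever model API you actually use; it is an assumption of this sketch, not a real client:

```typescript
// Stand-in for a real model call; any concrete API is an assumption here.
type LLMCall = (prompt: string) => string;

// Run focused prompts in sequence, threading each reply into the next prompt.
function chainPrompts(steps: string[], callModel: LLMCall): string {
  let context = "";
  for (const step of steps) {
    const prompt = context ? `Previous output:\n${context}\n\n${step}` : step;
    context = callModel(prompt);
  }
  return context; // the final step's output
}
```

Each call stays single-purpose, yet no step loses sight of what came before.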

Mistake 3: Ignoring the Context Window

Every model has a finite context window. When you paste an entire 5,000-line codebase, several things happen:

  • The model may silently truncate old context, losing your initial instructions.
  • Information in the middle of long contexts gets less attention ("lost in the middle" problem).
  • The model's response quality degrades because it's trying to process too much noise.
  • You waste tokens on irrelevant code, leaving less room for the model's response.

💡 Note

Golden rule: include only the files, functions, and type definitions the AI needs to see for the current task. A focused 200-line context consistently outperforms a noisy 5,000-line dump.
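The golden rule can even be enforced mechanically. A minimal sketch, assuming files have already been tagged as relevant to the current task (the `SourceFile` shape is hypothetical; the 200-line budget echoes the note above):

```typescript
interface SourceFile {
  path: string;
  lines: string[];
  relevant: boolean; // assumed to be tagged upstream for the current task
}

// Assemble a prompt context from relevant files only, capped at a line budget.
function buildContext(files: SourceFile[], maxLines = 200): string {
  const picked: string[] = [];
  let used = 0;
  for (const f of files) {
    if (!f.relevant) continue; // drop files the task doesn't need
    if (used + f.lines.length > maxLines) break; // stay under the budget
    picked.push(`// ${f.path}\n${f.lines.join("\n")}`);
    used += f.lines.length;
  }
  return picked.join("\n\n");
}
```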

Mistake 4: Not Specifying the Output Format

If you don't tell the model what format you want, it will default to whatever format seems most natural — which often isn't what you need. This is especially problematic when you need structured data.

  • Want JSON? Say: "Return the result as a JSON object with this schema: { name: string, age: number }"
  • Want a specific code style? Say: "Use arrow functions, TypeScript strict mode, and named exports."
  • Want a list? Say: "Format your response as a numbered list with one item per line."
  • Want just code? Say: "Respond with only the code. No explanations, no markdown formatting."
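Even when you request the JSON schema above, replies can drift from it, so it pays to validate before use. A sketch using the `{ name: string, age: number }` schema from the first bullet (the `parsePerson` helper name is hypothetical):

```typescript
interface Person {
  name: string;
  age: number;
}

// Parse a model reply and verify it matches the requested schema.
function parsePerson(reply: string): Person {
  const data = JSON.parse(reply);
  if (typeof data.name !== "string" || typeof data.age !== "number") {
    throw new TypeError("Reply does not match { name: string, age: number }");
  }
  return data as Person;
}
```

Specifying the format in the prompt makes valid output likely; validating it makes the rest of your code safe either way.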

Mistake 5: Not Providing Negative Constraints

Telling the model what NOT to do is often as important as telling it what to do. Without negative constraints, the model will make "helpful" changes you didn't ask for:

  • "Refactor this function" → The AI might rename variables, change the API, or switch libraries.
  • "Add type annotations" → The AI might also refactor the logic while adding types.
  • "Fix the bug on line 23" → The AI might rewrite the entire function.

Always include constraints like: "Do NOT change the function signatures", "Keep the existing variable names", "Only modify the error handling — leave everything else unchanged."

Mistake 6: Forgetting to Iterate

Your first prompt is rarely perfect, and that's okay. The mistake is treating the first output as final instead of iterating on it. Effective prompting is a conversation:

  • Attempt 1: Get a rough draft.
  • Attempt 2: "The function handles the happy path correctly, but it doesn't handle null inputs. Add null checks."
  • Attempt 3: "Good. Now add JSDoc comments to the function and all its parameters."
  • Attempt 4: "The null check should throw a TypeError, not return undefined. Fix that."

Each iteration adds specificity. By the third or fourth message, the output is usually exactly what you want — and you've spent less time than writing the code from scratch would have taken.
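Applied to the four attempts above, the conversation converges on something like this (the `sum` function itself is an illustrative assumption; the JSDoc and the TypeError come straight from attempts 3 and 4):

```typescript
/**
 * Sums an array of numbers.
 * @param values - The numbers to add together.
 * @returns The sum of all values.
 * @throws {TypeError} If `values` is null or undefined.
 */
function sum(values: number[] | null | undefined): number {
  // Attempt 2 added the null check; attempt 4 made it throw a TypeError.
  if (values == null) {
    throw new TypeError("values must not be null or undefined");
  }
  return values.reduce((acc, v) => acc + v, 0);
}
```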

Mistake 7: Copy-Pasting Without Context

Pasting a block of code with just "fix this" or "what's wrong?" forces the model to guess what the code is supposed to do, what environment it runs in, and what error you're experiencing. Always provide:

  • What the code is supposed to do (the expected behavior).
  • What's actually happening (the actual behavior or error message).
  • Any relevant context (framework version, related files, environment).
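One way to make this habit automatic is a small template that refuses to omit those three items (the structure and field names here are an assumption, not a standard):

```typescript
// Build a debugging prompt that always carries expected behavior,
// actual behavior, and environment context alongside the code.
function debugPrompt(opts: {
  code: string;
  expected: string;
  actual: string;
  environment: string;
}): string {
  return [
    `Expected behavior: ${opts.expected}`,
    `Actual behavior: ${opts.actual}`,
    `Environment: ${opts.environment}`,
    "Code:",
    opts.code,
  ].join("\n");
}
```

Because every field is required, "fix this" with no context becomes a type error rather than a wasted round trip.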

Key Takeaways

  • Be specific — vague prompts produce vague outputs.
  • One task per prompt — break complex work into focused steps.
  • Manage your context — include only what's relevant.
  • Specify the format — don't leave it to chance.
  • Use negative constraints — tell the model what NOT to change.
  • Iterate — treat prompting as a conversation, not a one-shot.