The Field Guide to Avoiding GPT-5 MISTAKES
Yesterday, I dropped a new video on YouTube breaking down the new considerations to keep in mind when using GPT-5. I covered the changes OpenAI implemented with GPT-5 and provided a high-level strategy for getting truly professional output from the model using the MISTAKES framework.
If you prefer to watch or want the full context, you can see the deep dive right here:
In the video, I mentioned that I created a detailed companion guide with specific, copy-and-paste prompts and checklists.
This is that guide.
For those who are ready to go from theory to practice, this is your actionable toolkit.
The GPT-5 "M.I.S.T.A.K.E.S." Field Guide
Below is a breakdown of each mistake and the specific prompt or fix you can use immediately.
As I explained in the video, don’t be afraid to ask the model to work harder if that is what you need.
THINK HARD
M — Map
Mistake: Jumping in without a mission brief (goal, audience, constraints, success criteria).
The Fix: Give GPT-5 a full mission brief before you start.
Role: [expert]. Goal: [outcome]. Audience: [who]. Inputs: [links/notes]. Constraints: [style, length, compliance]. Success: [acceptance tests/metrics]. Output format: [sections/table/JSON]. Only proceed if unambiguous; otherwise ask 3 clarifying questions.
Mistake: Treating GPT-5 like a search box.
The Fix: Ask for a plan first, then the deliverable.
Before drafting, outline the approach, assumptions, and risks. Then wait for my ‘go’.
I — Investigate
Mistake: Blind trust in facts, dates, and numbers.
The Fix: Require verification paths for every claim.
Mark each claim High/Med/Low confidence; list what would change the answer; propose 2 ways to verify.
Mistake: Letting your own prompt bias anchor the answer.
The Fix: Ask for both the solution and rebuttal.
Generate the answer and a counter-answer that would be true if my premise is wrong; compare them.
S — Sketch
Mistake: One-shot mega-prompts that try to do everything at once.
The Fix: Scaffold your request. Go from brief → outline → sample → full draft. Lock in the structure early.
Propose 2 alternative outlines with tradeoffs; I’ll pick one.
Mistake: Over/under-steering the router.
Over-steering: forcing “think hard” on trivial tasks → slow + verbose.
Under-steering: letting the router skim complex tasks.
The Fix:
Prompt tag for depth:
This is complex, reason step-by-step and show uncertainties.
Prompt tag for speed:
This is routine, answer concisely, no deliberation transcript, max 6 bullets.
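If you reuse these two steering tags often, a tiny helper keeps them consistent. This is a sketch only; the function and constant names are hypothetical, and the tag text is taken verbatim from the prompts above.

```python
# Illustrative helper (hypothetical names): prepend the matching
# depth or speed tag from the guide to any task prompt.

DEPTH_TAG = "This is complex, reason step-by-step and show uncertainties."
SPEED_TAG = "This is routine, answer concisely, no deliberation transcript, max 6 bullets."

def tag_prompt(task: str, complex_task: bool) -> str:
    """Return the task text prefixed with the appropriate steering tag."""
    tag = DEPTH_TAG if complex_task else SPEED_TAG
    return f"{tag}\n\n{task}"
```

For example, `tag_prompt("Summarize this memo.", complex_task=False)` yields a prompt that opens with the speed tag, so routine requests never accidentally trigger deep deliberation.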
T — Test
Mistake: Accepting the first good-looking output.
The Fix: A/B test by instruction delta.
Produce two distinct approaches with rationale and failure modes. Then recommend one.
Mistake: Never challenging an output that looks polished.
The Fix: Red-team your own output to find the hidden flaws.
Stress-test this plan: adversarial inputs, worst-case constraints, ethical/compliance pitfalls. Output: issues → mitigations.
A — Adapt
Mistake: Using static prompts that don’t learn from previous outcomes.
The Fix: Close the feedback loop. Use GPT-5 to improve its own instructions.
Here is feedback from use (wins/gaps). Rewrite the prompt as v2; diff the changes and explain expected impact.
K — Kickstart
Mistake: Re-solving the same tasks from scratch every time.
The Fix: Build a personal library of your best prompts, schemas, and starter packs for each workflow.
BONUS TIP: For each workflow, create a scheduled task in ChatGPT that runs every day, so the output is already waiting when you’re ready to review it.
E — Evaluate
Mistake: Judging the output based on "vibes".
The Fix: Define your acceptance criteria first. Have "Done when" checks for every brief.
Before output, self-check against these 5 acceptance tests. If any fail, revise and note what changed.
S — Sustain
Mistake: Long, rambling chats that accumulate stale context and confuse the model.
The Fix: Start fresh sessions for each new project phase. Paste only the minimum brief and the most current artifacts to keep the context clean.
Your Drop-in Prompt Stubs
Here are some of the most powerful, reusable stubs from the guide that you can paste into your work today.
The Universal Mission Brief:
Role: [expert]
Goal: [clear outcome]
Audience: [who]
Inputs: [links/notes]
Constraints: [style/length/policy]
Acceptance tests: [1..5]
Output: [format/schema]
First: ask 3 clarifying questions if needed.
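To make the Universal Mission Brief reusable across projects, you can render it from a template instead of retyping the skeleton. A minimal sketch, assuming Python and a hypothetical `fill_brief()` helper; the field names mirror the stub above.

```python
# Sketch only: render the Universal Mission Brief stub from keyword
# arguments so the skeleton stays consistent across workflows.

BRIEF_TEMPLATE = """\
Role: {role}
Goal: {goal}
Audience: {audience}
Inputs: {inputs}
Constraints: {constraints}
Acceptance tests: {acceptance_tests}
Output: {output}
First: ask 3 clarifying questions if needed."""

def fill_brief(**fields: str) -> str:
    """Render the mission brief; raises KeyError if a field is missing."""
    return BRIEF_TEMPLATE.format(**fields)
```

Because `str.format` raises a `KeyError` on any missing field, you can’t accidentally send a half-filled brief.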
For Critical Self-Review:
Run a preflight: coverage, clarity, correctness, compliance. If any <4/5, revise and say what changed.
For Red-Teaming & Skepticism:
Critique this as a skeptic: edge cases, failure modes, bias. Propose mitigations with effort/impact.
Your 5-Minute Pre-Flight Check
Before you start any major task with GPT-5 that has to be right, run through this quick audit:
Do we have a mission brief with acceptance tests?
Is the format pinned (e.g., a schema, table, or specific section layout)?
Have we A/B tested at least one alternative approach?
Are the verification steps for critical claims explicit?
Did we document the final prompt and inputs for reuse?
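If you want to run this audit programmatically before kicking off a big task, here is a minimal sketch; the checklist keys are hypothetical shorthand for the five questions above.

```python
# A sketch of the 5-minute pre-flight check as code: pass in which
# items are done, get back the ones still failing.

PREFLIGHT = [
    "mission_brief_with_acceptance_tests",
    "output_format_pinned",
    "ab_tested_alternative",
    "verification_steps_explicit",
    "prompt_and_inputs_documented",
]

def preflight(checks: dict[str, bool]) -> list[str]:
    """Return the checklist items that are missing or marked False."""
    return [item for item in PREFLIGHT if not checks.get(item, False)]
```

An empty return value means you’re clear to start; anything else is your to-do list before prompting.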
I sincerely hope this guide helps you avoid frustration with the new model and get truly professional, impactful output from GPT-5.
Which of these fixes will you implement first to enhance your workflow? Hit reply and let me know…I read every email.
Best,
Daniel
P.S. I’m working on a new format that pairs with the broadcast content and comes out more frequently. Let me know if you like where this is going.
Conversely, please let me know if the monthly mega-pack format I have been sending is still valuable and whether you prefer that.
Did I make a MISTAKE with the new approach? Let’s find out!
There might be room for both, but I want to hear from you either way! And if you can’t pick one, just hit “reply” and tell me directly.