Why this is hard to get right
The Problem with Pre-Mortems in Practice
Maya is a VP of Product at a 200-person B2B SaaS company. Her team is six weeks out from launching a self-serve trial tier — the biggest product bet of the year. Her CEO wants a risk review before the all-hands kick-off. She has two hours to pull something together.
She opens ChatGPT and types: "Write a pre-mortem for our self-serve launch."
The AI returns a polished but hollow document. It lists generic risks — "team misalignment," "unclear requirements," "technical debt" — with advice like "communicate clearly" and "set expectations early." Nothing she doesn't already know. Nothing tied to her actual initiative, her constraints, or her timeline. She spends 45 minutes rewriting it from scratch before giving up and improvising in the meeting.
This is the trap most leaders fall into with pre-mortems. The exercise itself is well-understood — Gary Klein formalized the technique in the 1990s as a way to surface failure modes before a project starts, rather than conducting a post-mortem after the damage is done. The research is solid: prospective hindsight increases the ability to identify reasons for future outcomes by up to 30%. But the practice breaks down at the execution layer, especially when you're running it under time pressure.
The core problem isn't that leaders don't know what a pre-mortem is. It's that translating the framework into a usable, audience-specific document requires synthesizing a lot of contextual detail — the initiative's goal, its success metrics, its staffing constraints, its security requirements, its decision-makers. When you hand an AI a vague prompt, it fills that void with generic content.
A well-structured prompt changes the output entirely. When Maya specifies the 90-day window, the two-engineer constraint, the SOC2 requirement, the target of cutting time-to-first-value from 14 to 5 days, and the four executives who will read the memo — the AI can finally do what it's actually good at: pattern-matching across failure modes that are relevant to this initiative, not a hypothetical one.
The result Maya gets with a precise prompt is a memo her CEO reads and annotates in the meeting. It surfaces three failure modes nobody had named. It assigns owners. It lists decisions needed that week.
The difference between a forgettable AI output and a memo that changes a meeting isn't the AI — it's the quality of the context you give it.
Common mistakes to avoid
Naming the Project Without Naming the Goal
Writing 'pre-mortem for Project Atlas' tells the AI almost nothing. The AI needs a measurable success definition — not a project name. Without it, failure modes are disconnected from real outcomes. Always include a specific metric: time-to-value, revenue target, churn reduction, or launch date. That anchor makes every risk relevant.
Omitting Hard Constraints
Skipping staffing limits, compliance requirements, or budget ceilings produces a memo full of mitigations you can't actually execute. The AI will recommend hiring more engineers or extending the timeline — neither of which is on the table. Name your real constraints so the mitigations stay within what's actually possible.
Forgetting to Define the Audience
A pre-mortem for an engineering team reads very differently from one for a CEO and VP Sales. Without audience context, the AI defaults to a generic executive summary tone that satisfies nobody. Specify your readers by title so the language, depth, and framing match how they think and what they need to decide.
Asking for Risks Without a Scoring Model
Listing risks without a probability-impact framework produces a flat list with no prioritization signal. Leaders can't act on an unranked list of 15 concerns. Request a 1–5 probability and impact score for each failure mode so your team knows which three risks to address this week and which five to monitor monthly.
Skipping the 'Decisions Needed' Section
A pre-mortem that only names risks is a worry list, not a leadership tool. The document's value comes from forcing decisions before the work starts. If you don't prompt the AI to include a 'decisions needed this week' section, you'll get analysis without action. That section is what makes the memo worth sharing in a live meeting.
Using a Generic Role Instead of a Specific Advisor Persona
Telling the AI 'act as a project manager' produces cautious, process-heavy output. Framing the role as a COO or strategy advisor — someone who speaks to business outcomes, not task lists — produces the direct, prioritized tone executives expect. The persona shapes both vocabulary and risk framing.
The transformation
Write a pre-mortem for our new strategy project and list some risks and how to avoid them.
You’re a COO and strategy advisor. Draft a **one-page pre-mortem memo** for our initiative: **launch self-serve onboarding for our B2B SaaS**.
1. Audience: **CEO, VP Product, VP Sales, Head of CS**
2. Time horizon: **90 days**, target launch **May 15**
3. Goal: reduce time-to-first-value from **14 days to 5 days**
4. Constraints: **2 engineers**, no pricing changes, SOC2 must stay intact
Include sections: **Assumptions**, **Top 10 failure modes** (probability/impact 1–5), **Early warning signals**, **Mitigations with owners**, **Decisions needed this week**.
Use a direct, calm tone.
Why this works
Persona Sets the Register
The After Prompt opens with 'You're a COO and strategy advisor' — not 'act as a project manager.' This single framing choice shifts the AI's vocabulary, prioritization logic, and tone toward executive communication. It produces fewer bullet-point checklists and more direct, decision-oriented language that lands in a leadership meeting.
Measurable Goal Anchors Every Risk
The phrase 'reduce time-to-first-value from 14 days to 5 days' gives the AI a concrete success definition. Every failure mode it surfaces now gets tested against that specific metric — which produces risks that are actually relevant, not risks that could apply to any software launch at any company.
Named Constraints Bound the Mitigations
The After Prompt specifies '2 engineers, no pricing changes, SOC2 must stay intact.' These limits prevent the AI from recommending solutions that aren't available — like expanding the team or simplifying compliance. The mitigations stay realistic and immediately actionable within your actual operating conditions.
Scoring Model Forces Prioritization
Requesting 'probability/impact 1–5' on each failure mode turns a flat risk list into a decision framework. Leaders can sort by combined score and immediately see which two or three risks demand mitigation this week. Without this, the AI produces an equal-weight list that's hard to act on under time pressure.
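To see why the scoring instruction matters, note that once every failure mode carries a 1–5 probability and impact score, prioritization becomes simple arithmetic your team can verify. The risk names and scores below are hypothetical, purely to illustrate the ranking:

```python
# Hypothetical failure modes with 1-5 probability/impact scores,
# as the prompt requests. Sorting by the combined score reproduces
# the prioritization the memo should contain.
risks = [
    {"name": "Trial users stall before first value", "probability": 4, "impact": 5},
    {"name": "SOC2 review delays the launch date", "probability": 2, "impact": 5},
    {"name": "Sales resists the self-serve motion", "probability": 3, "impact": 3},
    {"name": "Two-engineer team hits scope overrun", "probability": 4, "impact": 4},
]

# Rank by probability x impact, highest combined score first
ranked = sorted(risks, key=lambda r: r["probability"] * r["impact"], reverse=True)

# The top three are the risks to address this week
for r in ranked[:3]:
    print(f'{r["probability"] * r["impact"]:>2}  {r["name"]}')
```

A flat, unranked list gives the room nothing to sort by; the combined score makes the top three self-evident.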
Section Structure Drives Action
The explicit section list — Assumptions, Top 10 failure modes, Early warning signals, Mitigations with owners, Decisions needed this week — tells the AI exactly what shape the output must take. It also ensures the most actionable section ('decisions needed') appears in every draft, making the memo a tool for alignment rather than just analysis.
The framework behind the prompt
The Research Behind Pre-Mortems
The pre-mortem technique was developed by cognitive psychologist Gary Klein, who built on a 1989 study of prospective hindsight and popularized the method in his 1998 book Sources of Power. The core insight is that prospective hindsight — imagining a future failure as if it has already happened — dramatically improves a team's ability to identify specific causes.
A 1989 study by Deborah Mitchell, Jay Russo, and Nancy Pennington found that prospective hindsight increased the generation of accurate reasons for future outcomes by approximately 30% compared to standard foresight exercises. The effect works because it bypasses optimism bias: when you ask a team "what could go wrong," they self-censor. When you tell them "it failed — why?" they think differently.
Daniel Kahneman, in Thinking, Fast and Slow, endorsed the pre-mortem as one of the most practical tools for combating the planning fallacy — the universal tendency to underestimate time, costs, and risks while overestimating benefits. Organizations chronically produce best-case-scenario plans. A pre-mortem forces the team to briefly inhabit a worst-case-scenario world before committing resources.
In practice, the technique connects to several established frameworks:
- FMEA (Failure Mode and Effects Analysis): Originally developed for aerospace engineering, FMEA uses probability and severity scoring to prioritize failure modes. The 1–5 scoring model in the After Prompt mirrors FMEA logic without the formal process overhead.
- Red Team / Blue Team Analysis: Organizations like the US military and large consulting firms use adversarial scenario planning to find gaps in strategy. A pre-mortem is a condensed, single-team version of this approach.
- Pre-mortem vs. Risk Register: A risk register is ongoing and portfolio-wide. A pre-mortem is bounded, initiative-specific, and designed to force decisions at the start of work — making it faster to produce and easier to act on.
The gap between the theory and the execution is where most teams struggle. The pre-mortem is conceptually simple but cognitively demanding under deadline pressure — which is precisely where a well-structured AI prompt adds the most value.
Prompt variations
You are a seasoned venture operator and board advisor. Draft a one-page pre-mortem memo for this initiative: entering the mid-market segment in Germany, our first international expansion.
Context:
- Current customer base: 300 SMB customers in the US
- Team allocated: 1 sales lead, 1 localization contractor, no dedicated engineering
- Timeline: first paid customer in 6 months
- Constraints: GDPR compliance required, no new hires approved, budget capped at $80K
- Success metric: 3 closed mid-market deals at 15K EUR ARR by month 6
Include these sections:
- Core assumptions we're betting on
- Top 8 failure modes with probability and impact scores (1–5 each)
- Early warning signals we can track monthly
- Mitigations with a named owner for each
- Three decisions the founding team must make before month 1 begins
Tone: direct and candid. Write for a founding team of four, not an external audience.
You are a principal engineering lead and delivery risk advisor. Write a pre-mortem memo for this initiative: migrating our monolith to a services architecture for the payments module, targeting a production cutover in 12 weeks.
Context:
- Team: 3 backend engineers, 1 QA engineer, 0.5 DevOps
- Constraints: zero downtime required, PCI DSS compliance must be maintained, no feature freeze allowed during migration
- Success metric: payments latency under 200ms, zero data loss, zero customer-visible incidents at cutover
- Audience: VP Engineering and CTO
Sections to include:
- Technical assumptions underlying the migration plan
- Top 10 failure modes ranked by probability and blast radius (1–5 each)
- Observable signals that indicate the migration is drifting off track
- Mitigation actions with engineer-level owners
- Go/no-go criteria for the cutover decision
Use plain technical language. Avoid management abstractions. Write as if you'll present this in a 30-minute engineering review.
You are a VP of Customer Success with deep experience in SaaS retention. Draft a pre-mortem memo for this initiative: redesigning our onboarding flow to remove human-touch steps and move customers to a fully digital journey.
Context:
- Current model: 2 CSM-led calls per new customer in weeks 1 and 3
- New model: automated email sequence plus in-app guidance, no calls unless health score drops below threshold
- Affected customers: all new SMB accounts (roughly 40 per month)
- Success metric: 90-day retention stable at 88% or above; support ticket volume does not increase by more than 10%
- Constraint: CSM team is not being reduced; their capacity is being redirected to mid-market
- Audience: Chief Customer Officer and Head of Support
Sections:
- Assumptions about customer behavior we have not yet validated
- Top 8 failure modes with probability and impact scores
- Leading indicators we can track in the first 30 days
- Mitigation plan with owners across CS, Product, and Support
- Rollback criteria: conditions under which we revert to the human-touch model
Tone: honest and specific. Flag risks to retention directly — do not soften.
You are a B2B marketing strategist and GTM advisor. Write a pre-mortem memo for this initiative: launching a product-led growth (PLG) motion alongside our existing sales-led motion, targeting inbound sign-ups from individual contributors at mid-market companies.
Context:
- PLG launch date: 10 weeks from today
- Team: 1 demand gen manager, 1 content writer, 1 lifecycle marketing contractor
- No additional budget approved; existing $15K/month ad spend will be partially reallocated
- Success metric: 200 qualified PLG sign-ups in the first 60 days post-launch, with 15% converting to a paid plan
- Constraint: sales team must not see PLG as a competitive channel; their commission structure is unchanged
- Audience: CMO and VP Sales
Include:
- GTM assumptions we're treating as facts but haven't validated
- Top 8 failure modes with probability and impact scores (1–5 each)
- Weekly signals that indicate the motion is working or stalling
- Mitigations with owners across Marketing, Sales, and Product
- Alignment decisions that must be made before launch week
Be direct about the organizational risks — not just the channel risks.
When to use this prompt
Founders planning a major product bet
Run a pre-mortem before you commit headcount and timeline. Share the memo to align your exec team on safeguards.
Product managers preparing an executive review
Bring a structured risk view to your go/no-go meeting. Use the “decisions needed” section to close gaps fast.
Customer success leaders scaling onboarding changes
Identify failure modes that hit retention, support load, or adoption. Assign owners for mitigations across teams.
Engineering managers managing delivery risk
Map early warning signals and mitigation actions tied to staffing limits. Reduce surprises during execution.
Marketing teams launching a new go-to-market motion
Pressure-test assumptions about channels, messaging, and handoffs. Turn risks into concrete checks and owners.
Pro tips
1. Define the single success metric so every risk ties to the same outcome.
2. Name your hardest constraint because it forces realistic mitigations.
3. List 3 early warning signals you can measure weekly to catch issues sooner.
4. Specify who can approve trade-offs so the memo drives decisions, not debate.
One of the most effective ways to sharpen a pre-mortem output is to seed the AI with two or three risks you already know about and explicitly ask it to find the ones you haven't considered.
Add this instruction to your prompt:
We already know about these risks: (1) the integration with our billing system may delay launch by two weeks, (2) our sales team may resist the self-serve motion. Do not include these in your output. Find the failure modes we haven't named yet.
This exclusion technique does two things:
- It prevents the AI from listing obvious risks that your team already has mitigations for
- It forces the model to work harder to find second-order and organizational failure modes — the ones teams typically miss under deadline pressure
You can also use this technique in reverse: ask the AI to stress-test a mitigation you've already proposed. For example: 'We plan to mitigate the SOC2 risk by involving our security lead in every sprint review. What could still go wrong with this mitigation?' That follow-up prompt often surfaces the most valuable insight of the entire exercise.
The pre-mortem structure is robust across industries, but the framing, language, and failure mode categories shift meaningfully depending on your sector.
Financial services and fintech: Lead with regulatory and compliance failure modes. Your constraints section must include specific regulatory bodies (e.g., OCC, FCA, FINRA) and any audit or reporting obligations tied to the timeline. Replace 'probability/impact' with 'likelihood/severity' to match the language your risk and compliance teams use.
Healthcare and health tech: HIPAA and patient safety considerations belong in the constraints section, not as footnotes. The early warning signals section should include clinical indicators alongside operational ones. Frame the audience as clinicians and compliance officers, not just executives.
Consumer marketplaces: Add a 'supply-demand imbalance' failure mode category that doesn't appear in B2B SaaS pre-mortems. Your success metric should specify both sides of the marketplace. Mitigations need owners on both the supply and demand sides of your organization.
Internal operations and change management: The biggest failure modes are almost always organizational — resistance from middle management, unclear ownership, conflicting incentives. Ask the AI explicitly to weight organizational failure modes as highly as operational ones. This shifts the output from a project plan to a change management document.
A pre-mortem memo generated with a strong prompt does its best work when you treat it as a facilitation artifact, not a final document.
Here's a 30-minute meeting structure that works well:
Minutes 0–5: Distribute the memo. Share it 24 hours in advance if possible. Ask each attendee to mark their top two failure modes before the meeting.
Minutes 5–15: Challenge the failure mode list. Go through the top five ranked items. Ask the room: 'Is this ranked correctly? Is there a failure mode missing that's more dangerous than anything on this list?' The AI-generated list acts as a forcing function — people react faster to a draft than to a blank page.
Minutes 15–25: Assign owners. For every mitigation, someone must put their name on it in the room. The AI can suggest owners by function, but the room confirms them. Unowned mitigations do not survive the meeting.
Minutes 25–30: Close the 'decisions needed' section. Each decision gets a deadline and a decision-maker. If a decision can't be made in the meeting, it gets an owner and a due date before the next milestone.
This structure turns a document into a decision — which is the original purpose of the pre-mortem method.
When not to use this prompt
Don't use a pre-mortem memo prompt when the initiative lacks a defined goal or timeline. If you can't state a measurable success metric and a target date, you don't have enough definition to run a meaningful pre-mortem. The AI will produce generic risks because there's no specific outcome to fail against. Define the initiative more clearly first.
Avoid this format for low-stakes work. A pre-mortem is calibrated for high-stakes, high-uncertainty initiatives — major product launches, new market entries, significant organizational changes. Using it for a minor content campaign or a routine process update wastes meeting time and dilutes the seriousness of the tool.
Don't rely on a solo AI-generated memo as your only risk process. The memo is a starting point for a team conversation, not a replacement for one. If your team never reviews, challenges, or assigns ownership to the output, it becomes a document in a folder — not a leadership tool.
Skip the pre-mortem format if your organization requires a formal risk management methodology like full FMEA, ISO 31000, or enterprise risk management (ERM) documentation. In regulated industries, a narrative memo may not satisfy audit or compliance requirements. Use the appropriate formal process and treat the pre-mortem as a preparation exercise, not the deliverable itself.
Troubleshooting
The AI lists 10 failure modes but they all feel equally weighted and generic
Add two specificity constraints to your prompt. First, instruct the AI: 'Every failure mode must reference a specific constraint, team, or metric named in this prompt.' Second, require scoring: 'Rank failure modes by combined probability x impact score and bold the top three.' These two instructions force differentiation and eliminate the flat-list problem.
The mitigations section reads as advice, not assigned actions
Reframe the mitigations instruction to require owner-action pairs. Change 'mitigations' to: 'For each failure mode in your top five, write one mitigation in this format: [Action verb] + [specific action] + [owner by role] + [deadline or trigger].' That format forces the AI to write 'Head of CS reviews onboarding drop-off data every Friday starting week 2' instead of 'ensure regular check-ins.'
The output is too long and won't fit on one page when formatted
Set explicit length constraints at the section level. Instead of 'one-page memo,' specify: 'Maximum 3 sentences per failure mode. Maximum 2 sentences per mitigation. Assumptions section capped at 5 bullet points.' Section-level word limits are more reliable than page-level limits because AI models don't measure pages — they measure tokens.
The early warning signals are lagging indicators, not leading ones
Add this instruction to your prompt: 'Every early warning signal must be measurable in the first two weeks of execution, not at the end of a milestone. Prefer behavioral signals (e.g., activation rate in week 1) over outcome signals (e.g., 90-day retention).' This forces the AI to distinguish between signals you can act on early versus confirmations of failure you'll see too late.
The AI ignores organizational or political failure modes and focuses only on technical ones
Explicitly request organizational failure modes as a category. Add: 'Include at least three failure modes in the category of organizational risk — misaligned incentives, unclear ownership, or executive-level misalignment — separate from operational or technical risks.' Most AI models default to technical risk framing unless you name organizational risk as a required category.
How to measure success
How to Evaluate the Output
A strong AI-generated pre-mortem memo passes these checks:
Specificity test
- Every failure mode references something specific to your initiative — a named constraint, metric, team, or date
- No failure mode could apply unchanged to a different company's project
Prioritization test
- Failure modes carry distinct scores — not every risk is rated 3/3
- The top three risks are identifiably more severe than the bottom three
Actionability test
- Each mitigation names a specific action (a verb + an output) and a function or role responsible
- The 'decisions needed' section contains items that are genuinely open, not already resolved
Leading indicator test
- Early warning signals are measurable within the first two weeks of execution
- At least one signal is a behavioral metric (activation rate, usage frequency) rather than a lagging outcome
Audience fit test
- The language and depth match the seniority of the named audience
- The tone is direct — no hedging phrases like "it may be worth considering"
If the memo fails any of these checks, go back to the prompt and add the missing constraint, scoring instruction, or specificity requirement. The output quality maps almost directly to the input precision.
Now try it on something of your own
Reading about the framework is one thing. Watching it sharpen your own prompt is another — takes 90 seconds, no signup.
Turn your initiative details into a one-page pre-mortem memo your leadership team will actually act on.
Frequently asked questions
How long should the memo be?
One page is the target. Leadership meetings move fast, and a two-page memo usually gets skimmed or skipped. Structure matters more than length — a one-page document with clearly labeled sections (failure modes, mitigations, decisions needed) gets read and acted on. If your initiative is very complex, a two-page limit is acceptable, but anything beyond that signals you need to simplify the initiative itself.
Can I run a pre-mortem on a project that's already underway?
Yes — and it's often more valuable mid-project than at kickoff. Adjust the framing from 'before we start' to 'at the 30-day mark, what could still cause this to fail.' Specify what has already happened, what assumptions have already been tested, and what constraints have changed since launch. The AI will surface forward-looking risks rather than rehashing what you already know.
What if my organization uses a different risk scoring framework?
Replace the '1–5 probability/impact' instruction with your organization's model. For example:
- RAG status (Red / Amber / Green)
- FMEA scoring (Severity x Occurrence x Detectability)
- Likelihood x Consequence matrix (used in enterprise risk management)
Just name the framework in the prompt and the AI will apply it. You don't need to explain the scoring method — standard frameworks are well-known.
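Whichever model the AI applies, the underlying arithmetic is simple enough to sanity-check yourself. Here is a sketch of FMEA's risk priority number (RPN); the failure mode names and scores are hypothetical:

```python
# FMEA-style scoring: each factor is rated on a 1-10 scale, and the
# risk priority number is their product. Higher RPN = act sooner.
def rpn(severity: int, occurrence: int, detectability: int) -> int:
    """Risk priority number = severity x occurrence x detectability."""
    return severity * occurrence * detectability

# Hypothetical failure modes with (severity, occurrence, detectability)
modes = {
    "Billing integration slips": (8, 6, 4),
    "Onboarding emails land in spam": (5, 7, 3),
    "Activation metric misdefined": (9, 3, 8),
}

# Rank failure modes by RPN, highest first
for name, scores in sorted(modes.items(), key=lambda kv: -rpn(*kv[1])):
    print(f"{rpn(*scores):>3}  {name}")
```

Note how a hard-to-detect failure mode can outrank a more frequent one — that detectability factor is what FMEA adds over a plain probability/impact grid.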
What if the AI keeps returning generic risks?
Add a specificity instruction to your prompt. Try: 'Do not include generic risks that apply to any software project. Every failure mode must be specific to this initiative's constraints and timeline.' You can also prime the AI by listing two or three risks you already know about and instructing it to find ones you haven't considered. That exclusion signal dramatically sharpens the output.
Should we draft with AI first or run the exercise as a team first?
Both approaches work, but sequence matters. Using AI first gives you a structured draft that your team can react to — which is often faster and more productive than building from a blank page. Run the AI-generated memo as a starting point, then facilitate a 30-minute team session to challenge the failure modes and assign real owners. The AI does the structural work; your team adds the organizational truth.
How often should we revisit the pre-mortem?
Review it at every major milestone, at minimum. If your initiative runs 90 days, that means roughly every three to four weeks. Use the 'early warning signals' section as your checklist. If a signal fires, rerun the prompt with updated constraints and status so the mitigations stay current. A pre-mortem that's never revisited becomes a historical document, not a decision tool.
Does this work for non-product initiatives like hiring plans or culture change?
Yes — the structure transfers well to any high-stakes initiative with a defined outcome and timeline. For a hiring plan, your 'constraints' might be headcount budget and a target date. For a cultural change, your success metric might be a target eNPS score or retention rate. The key is keeping the prompt concrete: a specific goal, a specific audience, and specific constraints produce specific output regardless of the domain.
How is a pre-mortem different from a risk register?
A risk register is a living inventory — it tracks risks across a portfolio over time. A pre-mortem memo is a time-boxed, initiative-specific document designed to force decisions before work begins. The pre-mortem asks 'imagine we failed — why?' The risk register asks 'what risks exist right now?' Use a pre-mortem at kickoff to surface assumptions, and a risk register to track the ones you didn't catch.