
Honest Postmortem Analyzer

/em:postmortem — Honest Analysis of What Went Wrong

Installation

  1. Make sure Claude Code is installed on your machine and available in your terminal.

    Skills load from ~/.claude/skills/ when Claude Code starts up — so you need it on your machine first. If you don't have it yet, install it once with the command below, then run claude in any terminal to verify.

    One-time setup
    npm i -g @anthropic-ai/claude-code

    Already have it? Skip ahead.

  2. Paste the install command below into Claude Code or into your terminal.
    Install
    git clone https://github.com/alirezarezvani/claude-skills.git /tmp/alirezarezvani__claude-skills && mkdir -p ~/.claude/skills/postmortem-alirezarezvani && cp -r /tmp/alirezarezvani__claude-skills/c-level-advisor/executive-mentor/skills/postmortem/. ~/.claude/skills/postmortem-alirezarezvani/

    This copies the whole skill folder into ~/.claude/skills/postmortem-alirezarezvani/ — the SKILL.md plus any scripts, reference docs, or templates the skill ships with. Safe default: works for every skill.

    Faster alternative (instruction-only skills)

    Skips the clone and grabs only the SKILL.md file. Don't use this if the skill ships Python scripts, reference markdown files, or asset templates — they won't be downloaded and the skill will fail when it tries to load them.

    Quick install (SKILL.md only)
    mkdir -p ~/.claude/skills/postmortem-alirezarezvani && curl -fsSL https://raw.githubusercontent.com/alirezarezvani/claude-skills/main/c-level-advisor/executive-mentor/skills/postmortem/SKILL.md -o ~/.claude/skills/postmortem-alirezarezvani/SKILL.md
  3. Restart Claude Code.

    Quit and reopen Claude Code (or any other agent that loads from ~/.claude/skills/). New skills are picked up on startup.
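    Before restarting, you can sanity-check that the files landed where Claude Code looks for them. This is a minimal sketch using the same paths as the install step above:

    ```shell
    # Check that SKILL.md exists in the skill folder created by step 2
    SKILL_DIR="$HOME/.claude/skills/postmortem-alirezarezvani"
    if [ -f "$SKILL_DIR/SKILL.md" ]; then
      echo "Skill found at $SKILL_DIR, restart Claude Code to load it"
    else
      echo "No SKILL.md in $SKILL_DIR, re-run step 2"
    fi
    ```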

  4. Just ask Claude.

    Skills auto-activate when your request matches the skill's description — no slash command needed. Trigger phrases live in the skill's own frontmatter; you can read them in the “What this skill does” section below.

Prefer to read the source first? Open on GitHub.

When Claude uses it

/em:postmortem — Honest Analysis of What Went Wrong

What this skill does

/em:postmortem — Honest Analysis of What Went Wrong

Command: /em:postmortem <event>
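
For example, in a Claude Code session (the event description here is illustrative, drawn from the revenue-miss example later in this document):

```
/em:postmortem "Q3 revenue miss: closed $420K ARR vs $680K target"
```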

Not blame. Understanding. The failed deal, the missed quarter, the feature that flopped, the hire that didn't work out. What actually happened, why, and what changes as a result.


Why Most Post-Mortems Fail

They become one of two things:

The blame session — someone gets scapegoated, defensive walls go up, actual causes don't get examined, and the same problem happens again in a different form.

The whitewash — "We learned a lot, we're going to do better, here are 12 vague action items." Nothing changes. Same problem, different quarter.

A real post-mortem is neither. It's a rigorous investigation into a system failure. Not "whose fault was it" but "what conditions made this outcome predictable in hindsight?"

The purpose: extract the maximum learning value from a failure so you can prevent recurrence and improve the system.


The Framework

Step 1: Define the Event Precisely

Before analysis: describe exactly what happened.

  • What was the expected outcome?
  • What was the actual outcome?
  • When was the gap first visible?
  • What was the impact (financial, operational, reputational)?

Precision matters. "We missed Q3 revenue" is not precise enough. "We closed $420K in new ARR vs $680K target — a $260K miss driven primarily by three deals that slipped to Q4 and one deal that was lost to a competitor" is precise.

Step 2: The 5 Whys — Done Properly

The goal: get from what happened (the symptom) to why it happened (the root cause).

Standard bad 5 Whys:

  • Why did we miss revenue? Because deals slipped.
  • Why did deals slip? Because the sales cycle was longer than expected.
  • Why? Because the customer buying process is complex.
  • Why? Because we're selling to enterprise.
  • Why? That's just how enterprise sales works.

→ Conclusion: Nothing to do. It's just enterprise.

Real 5 Whys:

  • Why did we miss revenue? Three deals slipped out of quarter.
  • Why did those deals slip? None of them had identified a champion with budget authority.
  • Why did we progress deals without a champion? Our qualification criteria didn't require it.
  • Why didn't our qualification criteria require it? When we built the criteria 8 months ago, we were in SMB, not enterprise.
  • Why haven't we updated qualification criteria as ICP shifted? No owner, no process for criteria review.

→ Root cause: Qualification criteria outdated, no owner, no review process. → Fix: Update criteria, assign owner, add quarterly review.

The test for a good root cause: Could you prevent recurrence with a specific, concrete change? If yes, you've found something real.

Step 3: Distinguish Contributing Factors from Root Cause

Most events have multiple contributing factors. Not all are root causes.

Contributing factor: Made it worse, but isn't the core reason. If removed, the outcome might have been different — but the same class of problem would recur.

Root cause: The fundamental condition that made the outcome probable. Fix this, and this class of problem doesn't recur.

Example — failed hire:

  • Contributing factors: rushed process, reference checks skipped, team under pressure to staff up
  • Root cause: No defined competency framework, so interview process varied by who happened to conduct interviews

The distinction matters. If you address only contributing factors, you'll have a different-looking but structurally identical failure next time.

Step 4: Identify the Warning Signs That Were Ignored

Every failure has precursors. In hindsight, they're obvious. The value of this step is making them obvious prospectively.

Ask:

  • At what point was the negative outcome predictable?
  • What signals were visible at that point?
  • Who saw them? What happened when they raised them?
  • Why weren't they acted on?

Common patterns:

  • Signal was raised but dismissed by a senior person
  • Signal wasn't raised because nobody felt safe saying it
  • Signal was seen but no one had clear ownership to act on it
  • Data was available but nobody was looking at it
  • The team was too optimistic to take negative signals seriously

This step is particularly important for systemic issues — "we didn't feel safe raising the concern" is a much deeper root cause than "the deal qualification was off."

Step 5: Distinguish What Was in Control vs. Out of Control

Some failures happen despite correct decisions. Some happen because of incorrect decisions. Knowing the difference prevents both overcorrection and undercorrection.

  • In control: Process, criteria, team capability, resource allocation, decisions made
  • Out of control: Market conditions, customer decisions, competitor actions, macro events

For things out of control: what can be done to be more resilient to similar events? For things in control: what specifically needs to change?

Warning: "It was outside our control" is sometimes used to avoid accountability. Be rigorous.

Step 6: Build the Change Register

Every post-mortem ends with a change register — specific commitments, owned and dated.

Bad action items:

  • "We'll improve our qualification process"
  • "Communication will be better"
  • "We'll be more rigorous about forecasting"

Good action items:

  • "Ravi owns rewriting qualification criteria by March 15 to include champion identification as hard requirement. New criteria reviewed in weekly sales standup starting March 22."
  • "By March 10, Elena adds a deal-slippage risk flag in the CRM for any open opportunity >60 days old without a product demo"
  • "Starting April 1, Maria runs a 30-min retrospective with the enterprise sales team every 6 weeks, reviewing win/loss data"

For each action:

  • What exactly is changing?
  • Who owns it?
  • By when?
  • How will you verify it worked?

Step 7: Verification Date

The most commonly skipped step. Post-mortems are useless if nobody checks whether the changes actually happened and actually worked.

Set a verification date: "We'll review whether qualification criteria have been updated and whether deal slippage rate has improved at the June board meeting."

Without this, post-mortems are theater.


Post-Mortem Output Format

EVENT: [Name and date]
EXPECTED: [What was supposed to happen]
ACTUAL: [What happened]
IMPACT: [Quantified]

TIMELINE
[Date]: [What happened or was visible]
[Date]: ...

5 WHYS
1. [Why did X happen?] → Because [Y]
2. [Why did Y happen?] → Because [Z]
3. [Why did Z happen?] → Because [A]
4. [Why did A happen?] → Because [B]
5. [Why did B happen?] → Because [ROOT CAUSE]

ROOT CAUSE: [One clear sentence]

CONTRIBUTING FACTORS
• [Factor] — how it contributed
• [Factor] — how it contributed

WARNING SIGNS MISSED
• [Signal visible at what date] — why it wasn't acted on

WHAT WAS IN CONTROL: [List]
WHAT WASN'T: [List]

CHANGE REGISTER
| Action | Owner | Due Date | Verification |
|--------|-------|----------|-------------|
| [Specific change] | [Name] | [Date] | [How to verify] |

VERIFICATION DATE: [Date of check-in]

The Tone of Good Post-Mortems

Blame is cheap. Understanding is hard.

The goal isn't to establish that someone made a mistake. The goal is to understand why the system produced that outcome — so the system can be improved.

"The salesperson didn't qualify the deal properly" is blame. "Our qualification framework hadn't been updated when we moved upmarket, and no one owned keeping it current" is understanding.

The first version fires or shames someone. The second version builds a more resilient organization.

Both might be true simultaneously. The distinction is: which one actually prevents recurrence?

Related skills


Business Growth Toolkit

alirezarezvani

4 business growth agent skills and plugins for Claude Code, Codex, Gemini CLI, Cursor, OpenClaw. Customer success (health scoring, churn), sales engineer (RFP), revenue operations (pipeline, GTM), contract & proposal writer. Python tools (stdlib-only).

MIT

Revenue Pipeline Analyzer

alirezarezvani

Analyzes sales pipeline health, revenue forecasting accuracy, and go-to-market efficiency metrics for SaaS revenue optimization. Use when analyzing sales pipeline coverage, forecasting revenue, evaluating go-to-market performance, reviewing sales metrics, assessing pipeline analysis, tracking forecast accuracy with MAPE, calculating GTM efficiency, or measuring sales efficiency and unit economics for SaaS teams.


Executive Stress-Test Mentor

alirezarezvani

Adversarial thinking partner for founders and executives. Stress-tests plans, prepares for brutal board meetings, dissects decisions with no good options, and forces honest post-mortems. Use when you need someone to find the holes before the board does, make a decision you've been avoiding, or understand what actually went wrong.

MIT

Org Change Management

alirezarezvani

Framework for rolling out organizational changes without chaos. Covers the ADKAR model adapted for startups, communication templates, resistance patterns, and change fatigue management. Handles process changes, org restructures, strategy pivots, and culture changes. Use when announcing a reorg, switching tools, pivoting strategy, killing a product, changing leadership, or when user mentions change management, change rollout, managing resistance, org change, reorg, or pivot communication.

MIT