Things break. Releases misfire, invoices double‑charge, shipments go missing, a VP replies‑all. In those moments, apologies are polite—but they do not move the business. Founders, operators, and team leaders win by turning breakdowns into velocity. That requires a habit: bring solutions, not apologies.
Apology acknowledges the past. A solution buys back the future.
The distinction matters because recovery is a race against three clocks: customer trust, team confidence, and compounding risk. Every minute you spend explaining intent is a minute you’re not stabilizing impact. Most companies lose ground not because of the initial mistake, but because of slow, unclear response. The market forgives outages; it punishes opacity.
🧠 Think like an operator. Your job in a miss is to reduce uncertainty and restore control. That means facts first, plan second, owners third, deadlines last. Everything else is noise.
The 5‑Part Operator Script
Use this any time something breaks, whether technical, financial, or people‑related.
It is short, boring, and effective.
1️⃣ What happened.
Facts only. No spin, no blame, no adjectives.
2️⃣ Impact.
Who/what was affected, and for how long. Include scope and severity.
3️⃣ Stabilize.
Immediate actions taken to stop the bleeding.
4️⃣ Fix.
The corrective plan with named owners.
5️⃣ When.
Deadline for the fix and the schedule for updates.
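If you want the script in paste‑ready form, here is a minimal sketch in Python (the field names and example values are mine, purely illustrative) that renders the five parts as one update you can drop into Slack or email:

```python
from dataclasses import dataclass

@dataclass
class IncidentUpdate:
    what_happened: str  # 1) facts only
    impact: str         # 2) who/what, scope, severity, duration
    stabilize: str      # 3) immediate actions already taken
    fix: str            # 4) corrective plan with named owners
    when: str           # 5) deadline for the fix and the update cadence

    def render(self) -> str:
        # Emit the five parts in order, ready to paste into Slack or email.
        return "\n".join([
            f"1. What happened: {self.what_happened}",
            f"2. Impact: {self.impact}",
            f"3. Stabilize: {self.stabilize}",
            f"4. Fix: {self.fix}",
            f"5. When: {self.when}",
        ])

print(IncidentUpdate(
    what_happened="Deploy 142 introduced a retry loop in billing at 09:12 UTC.",
    impact="Roughly 300 customers double-charged over 41 minutes.",
    stabilize="Rolled back to deploy 141; duplicate charges queued for refund.",
    fix="Add an idempotency key to the retry path (owner: payments lead).",
    when="Fix ships Thursday; status updates every hour until then.",
).render())
```

The code isn't the point; the fixed structure is. Decide the shape of the update before the incident so nobody is composing from scratch under pressure.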
Operators don’t apologize their way to trust—they communicate clearly, act fast, and close the loop. Use the script above. Use these phrases below.
Stop Apologizing at Work (Cheat Sheet)
Here are 8 confident phrases that build respect and get results. Use them when you feel the urge to apologize.
1️⃣ Let me find out and get back to you.
2️⃣ Here’s a working idea we can test.
3️⃣ I need your input to move this forward.
4️⃣ This isn’t my domain, but here’s my take.
5️⃣ Thanks for your patience—here’s the update.
6️⃣ Here’s my perspective based on what I know.
7️⃣ Let me run an idea by you.
8️⃣ Thanks for walking me through this.
Why these work: they signal ownership, momentum, and partnership—exactly what customers and teams trust.
Remember: apology acknowledges the past. Solutions buy back the future. Train the phrases, drill the 5-part script, and publish a one-page incident template so everyone knows what “good” looks like under pressure.
Build the Muscle Before You Need It
You do not rise to the level of your goals; you fall to the level of your systems. Prepare the response system now, not during your next incident.
→ Define roles.
↳ Incident Commander runs the response and makes tradeoffs.
↳ Comms Lead writes updates to customers and internal channels.
↳ Fix Lead(s) own technical or process remediation.
↳ Scribe captures timeline, decisions, and data for the postmortem.
→ Create a one‑page incident template. Put it in your wiki. Pin it in Slack. Include the 5‑part script, severity levels, who to page, and how to escalate.
→ Instrument your metrics. Track three numbers:
↳ MTTA (mean time to acknowledge) — speed of awareness.
↳ MTTR (mean time to restore) — speed to stability.
↳ TTP (time to plan) — speed to a credible fix path.
If you can’t measure these, you can’t improve them. And if you don’t publish them, your team won’t believe improvement matters.
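To show how lightweight this instrumentation can be, here is a minimal sketch (Python; the timestamps and field names are hypothetical stand‑ins for whatever your paging or ticketing tool records) of how the three numbers fall out of an incident log:

```python
from datetime import datetime
from statistics import mean

# Hypothetical incident log: when each incident started, was acknowledged,
# had a credible fix plan, and was restored.
incidents = [
    {"started": "2025-03-01 09:12", "acknowledged": "2025-03-01 09:18",
     "plan": "2025-03-01 09:40", "restored": "2025-03-01 09:53"},
    {"started": "2025-03-14 22:05", "acknowledged": "2025-03-14 22:31",
     "plan": "2025-03-14 23:10", "restored": "2025-03-15 00:02"},
]

def minutes_between(start: str, end: str) -> float:
    fmt = "%Y-%m-%d %H:%M"
    return (datetime.strptime(end, fmt) - datetime.strptime(start, fmt)).total_seconds() / 60

mtta = mean(minutes_between(i["started"], i["acknowledged"]) for i in incidents)  # speed of awareness
ttp  = mean(minutes_between(i["started"], i["plan"]) for i in incidents)          # speed to a credible fix path
mttr = mean(minutes_between(i["started"], i["restored"]) for i in incidents)      # speed to stability

print(f"MTTA: {mtta:.0f} min | TTP: {ttp:.0f} min | MTTR: {mttr:.0f} min")
```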
→ Run drills. Quarterly is fine; monthly is better. Simulate a billing error, a production outage, an HR complaint, a vendor failure. Practice the script. Rotate roles. Time everything. Debrief immediately.
→ Pre‑write customer updates. Draft the skeleton text for common incidents with blanks for specifics. Under pressure, good writing is rare; templates keep you clear and calm.
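As one illustration, a pre‑written skeleton can be as simple as a template with named blanks. The sketch below uses Python's string.Template; every placeholder and example value is hypothetical:

```python
from string import Template

# Skeleton for a customer-facing outage update: the structure is decided in
# advance, only the blanks get filled in under pressure.
OUTAGE_UPDATE = Template(
    "We are investigating an issue affecting $affected_service.\n"
    "Impact: $impact_summary (since $start_time).\n"
    "What we have done so far: $stabilizing_actions.\n"
    "Next update: $next_update_time."
)

print(OUTAGE_UPDATE.substitute(
    affected_service="checkout",
    impact_summary="some card payments are failing",
    start_time="09:12 UTC",
    stabilizing_actions="rolled back the latest release and paused retries",
    next_update_time="15:00 UTC",
))
```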
How Leaders Create a High‑Recovery Culture
Culture is not what you say at the all‑hands; it’s how you respond under load. The team will copy what they see you do.
→ Resist the story. In the moment, ban “should’ve” and “why didn’t we.” Stories can wait. Action cannot.
→ Make one person unmistakably in charge. Diffused responsibility is delayed recovery. The Incident Commander decides; everyone else inputs.
→ Communicate on a clock. Set an update cadence (e.g., every 30–60 minutes internally; hourly to customers on major incidents). Hit it even if the update is “No change; next update at 15:00.” Cadence beats content for trust.
→ Choose clarity over comfort. Tell customers the truth in plain language. Avoid euphemisms. Share the plan, not the panic.
→ Close the loop. When fixed, announce resolution and show the single change that will prevent recurrence. Trust compounds when you can point to an artifact—a test, a checklist, a new control—not just good intentions.
Use “Sorry” the Right Way
Apology is not banned; it’s just not the finish line. Use it to establish empathy, not to end the conversation.
Correct order:
Acknowledge impact (“We caused an outage that blocked your team for 37 minutes.”)
Apologize (“We’re sorry—this violated our standard.”)
Execute the script (facts, impact, stabilize, fix, when).
What customers hear: respect, not excuses. What teams feel: leadership, not fear.
The Postmortem That Actually Improves Performance
Once the fire is out, the work begins. The goal is learning, not theater.
Run a blameless postmortem within 3 business days. Invite the people who were involved, the decision‑makers, and a neutral facilitator.
Agenda:
→ Timeline. What happened, minute by minute. Only facts.
→ 5 Whys. Push beyond the immediate trigger to the system design, staffing, or incentives that made the error possible.
→ Contributing factors. Tools, workload, communication, environment.
→ One change per failure mode. Prefer simple, permanent system changes over heroics.
→ Owner + due date. Put it on a roadmap, not a wishlist.
→ Proof. How will we know the change worked? Decide the metric now.
Publish the postmortem internally. If the incident was customer‑visible, publish a version externally. Brevity and honesty beat PR varnish.
Templates and Simple Systems That Prevent Repeats
Keep prevention lightweight. You don’t need a six‑sigma program to avoid recurring pain.
→ Checklists for the critical 20%. Pre‑deploy checklist. Finance close checklist. Offer‑letter checklist. The cost is minutes; the savings are real.
→ Guardrails, not gates. Feature flags, rate limits, per‑customer throttles, automatic rollbacks. Make the safe path the default (see the short flag sketch after this list).
→ Single‑source runbooks. One URL per high‑risk workflow. Short, current, and owned.
→ Change logs with owners. “What changed?” should be answerable in one scroll.
→ Policy of small changes. Shorter branches, small PRs, smaller batch sizes. The smaller the blast radius, the faster the recovery.
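To make "the safe path is the default" concrete, here is a minimal feature‑flag sketch (Python, with an environment variable standing in for whatever flag service or config store you actually run; the names are illustrative):

```python
import os

def flag_enabled(name: str, default: bool = False) -> bool:
    # Flags default to OFF: if the flag is missing or unreadable, you get the
    # current, known-good path. No redeploy needed to turn a bad release off.
    value = os.environ.get(f"FLAG_{name.upper()}", "")
    return value.strip().lower() in {"1", "true", "on"} if value else default

if flag_enabled("new_billing_retry"):
    pass  # new, riskier code path
else:
    pass  # current, known-good path (the default)
```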
💡 Boring systems beat brilliant rescues. You want to be the company that almost never needs a hero.
Why This Matters (More Than You Think)
Apologies lower anxiety for minutes. Solutions reduce risk for months. Systems prevent repeats for years. That compounding matters to revenue and retention:
→ Customers buy reliability. A visible recovery plan is a trust signal.
→ Employees crave competence. Clear response reduces stress and turnover.
→ Investors discount chaos. A calm, measured incident story tells them you are building a durable operator’s machine.
In practice, the most valuable capability at growth stage isn’t perfect strategy. It’s operational resilience under imperfect execution. A team that can diagnose, stabilize, and fix, over and over, will outpace a team that ships faster but stumbles on recovery.
Three moves to run this week:
✅ Create your one‑page incident template with the 5‑part script, roles, and an escalation path. Pin it where the team works.
✅ Run a 20‑minute drill on a realistic scenario. Time MTTA, MTTR, and TTP. Capture gaps and assign two improvements.
✅ Pick one recurring failure mode (e.g., fragile deploy, missed invoice step) and install a small, permanent system change to eliminate it.
What You’ll Get in Operating by John Brewton
This newsletter exists to make you a better operator. We focus on the repeatable plays that reduce risk and increase momentum:
→ Incident scripts and templates you can paste into Slack.
→ Blameless postmortem guides with examples.
→ Metrics that matter (with ranges to aim for at your stage).
→ Real case studies from operating companies—what failed, what changed, and what it produced.
→ Simple worksheets for runbooks, checklists, and change logs.
No fluff. Just practical operating moves you can deploy this week.
Bring solutions, not apologies. Model the behavior, write it down, and drill it. Your team will follow. Your customers will notice. That’s how trust compounds—and how companies last.
If you’d like to work together, I’ve carved out time each month to work 1:1 with a few top‑notch Founders and Operators. You can find the details here.
John Brewton documents the history and future of operating companies at Operating by John Brewton. He is a graduate of Harvard University and began his career as a Ph.D. student in economics at the University of Chicago. Since selling his family’s B2B industrial distribution company in 2021, he has been helping business owners, founders, and investors optimize their operations. He is the founder of 6A East Partners, a research and advisory firm asking the question: What is the future of companies? He still cringes at his early LinkedIn posts and loves making content each and every day, despite the occasional protestations of his beloved wife, Fabiola.