Operating Fast & Slow
How Daniel Kahneman and Amos Tversky's Cognitive Biases Shape Every Business Decision (And How to Solve For These Blind Spots)
The Strategy Execution Gap:
Why Smart Companies Still Fail
Every founder, operator, and executive knows the feeling: A winning strategy is developed in boardrooms, only to fizzle out in the trenches of execution. You’ve seen timelines slip, teams build consensus around flawed ideas, and innovative competitors seemingly materialize out of nowhere to disrupt your market.
This isn’t a random failure. It’s systematic—a $4 trillion annual crisis. The real problem? We all run on a flawed operating system for thinking. Until companies learn the architecture of how decisions are actually made, the strategy-execution gap will persist.
The Intellectual Revolution
From Rational Man to Behavioral Reality
Until the 1970s, economics assumed “Homo Economicus”—the perfectly rational actor seeking to maximize expected utility. But two Israeli psychologists, Daniel Kahneman and Amos Tversky, decided to test these assumptions with rigorous experiments. Their findings became foundational to the field of Behavioral Economics:
Heuristics and Biases: Humans rely on mental shortcuts that yield systematic errors.
Prospect Theory: We weigh losses more heavily than equivalent gains, a central pillar in how people actually make choices.
This body of work led to a Nobel Prize in Economics for Kahneman (2002), forever changing our understanding of judgment, risk, and decision-making in business, finance, and society. Behavioral economics now underpins strategy, marketing, product design, and management across the Fortune 500.
This article breaks down the primary ideas the two academics uncovered and applies them to the activities that underpin the operation of nearly any business. If you are an operator, founder, or leader, we hope you find it useful.
The Dual Operating System
System 1: Fast, Automatic, and Intuitive Thought
System 1 operates as the brain’s rapid, effortless, and automatic processing engine, generating impressions, intuitions, and snap judgments without conscious deliberation or voluntary control. This fast-thinking system evolved as an adaptive mechanism for survival, enabling our ancestors to make split-second decisions about threats, social hierarchies, and resource opportunities: cognitive tasks where speed mattered more than perfect accuracy.
The intellectual lineage of System 1 traces back through the development of dual-process theory: from William James’s 19th-century distinction between “associative” and “true reasoning,” through Posner and Snyder’s 1975 formalization of automatic versus controlled processing, to Stanovich and West’s coining of the “System 1/System 2” terminology in 2000, which Kahneman popularized in Thinking, Fast and Slow.
System 1 excels in high-validity environments with reliable patterns and rapid feedback: recognizing faces, reading social cues, driving familiar routes, or making expert judgments in well-practiced domains where thousands of repetitions have compressed complex pattern recognition into instant intuition. However, precisely because System 1 prioritizes cognitive ease and coherent narratives over logical rigor, it systematically produces the heuristics and biases that Kahneman and Tversky documented: it substitutes easier questions for harder ones (attribute substitution), treats available information as complete (WYSIATI), anchors on initial numbers, and cannot be turned off even when we know it’s leading us astray.
System 2: Slow, Deliberate, and Analytical Thought
System 2 encompasses the controlled, effortful mental operations that require conscious attention, including complex calculations, logical reasoning, comparing multiple alternatives, checking quality, and overriding System 1’s automatic impulses when motivation and capacity allow. While System 1’s evolutionary heritage optimized for speed in survival contexts, System 2 represents the more recent cognitive architecture enabling abstract thinking, long-term planning, hypothetical reasoning, and systematic problem-solving: capabilities essential for modern strategic decisions but metabolically expensive and easily depleted.
The historical development of System 2 as a construct parallels dual-process theory’s evolution: Posner and Snyder characterized controlled processes as capacity-draining and capable of both facilitating expected signals and inhibiting unexpected ones, while later researchers like Stanovich emphasized that Type 2 thinking engages working memory resources and correlates with fluid intelligence.
System 2’s critical limitation is what Kahneman calls “laziness.” It defaults to endorsing System 1’s conclusions unless something clearly doesn’t fit or unless explicit effort is invested in deliberate analysis. This is why most people identify with System 2 (it feels like “us”) even though System 1 generates most of our judgments. In organizational contexts, this translates to a persistent problem: companies default to fast, intuitive System 1 decision-making, relying on familiar routines, anchoring on last year’s plans, and accepting coherent narratives without scrutiny, until catastrophic failure forces the costly, reactive activation of System 2 analysis. The far cheaper solution is designing triggers that automatically engage deliberate thinking whenever decisions involve high stakes, irreversibility, novelty, or genuine uncertainty.
Organizational Parallel
Company System 1: “How we always do things”—the default rules, ingrained routines.
Company System 2: Deliberate planning, strategy reviews, structural process changes.
Action Point:
Create triggers that force System 2 activation (decision thresholds, structured reviews, written analyses prior to discussion) whenever stakes or complexity are high.
The Twelve Biases That Kill Strategy
BIAS 1: The Planning Fallacy
What It Is:
We systematically underestimate the length of projects, their cost, and the risks they’ll encounter while simultaneously overestimating their benefits and our ability to execute. This is a predictable pattern that persists even when we know that similar projects in the past took far longer than planned.
The Cognitive Mechanism:
The planning fallacy stems from taking the “inside view”—focusing on the unique features of your specific project and constructing a plan based on best-case assumptions. We imagine how the project should unfold if everything goes according to plan, ignoring the reality that unexpected obstacles always arise. Meanwhile, we neglect the “outside view”—statistical data on how long similar projects actually took at comparable organizations.
Real-World Cost:
Bent Flyvbjerg’s research on megaprojects revealed that 90% run over budget with an average overrun of 28%. In business: product launches that slip 6+ months, destroying market timing; digital transformations that run 2x over budget, consuming resources needed elsewhere; strategic initiatives abandoned mid-execution because timelines were so far off that leadership's patience expired before results materialized.
The Fix: Reference Class Forecasting
The antidote to the planning fallacy is forcing an “outside view” before finalizing plans:
Build a reference class database: Track actual timelines, costs, and outcomes from past initiatives. Include comparable projects from other companies when possible.
Require external benchmarking: Don’t accept timelines or budgets without data showing how long similar projects took at peer organizations. If a team proposes a 6-month timeline, they must show that similar projects have been completed in 6 months.
Apply statistical adjustment: Start with the average time for the reference class, then adjust for specific factors that make your project faster or slower. The adjustment should be modest—most projects aren’t as unique as we think. (A short sketch of this calculation follows this list.)
Build explicit buffers: Add 1.5-2x multipliers to bottom-up estimates based on historical variance in your reference class.
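To make the outside view concrete, here is a minimal Python sketch of how a reference-class adjustment might work. The reference-class durations, the adjustment factor, and the team's 6-month plan are hypothetical, purely for illustration; the point is that the estimate starts from the historical distribution, not from the bottom-up plan.

```python
import statistics

# Hypothetical reference class: actual durations (months) of comparable
# past initiatives, pulled from your tracking database or peer benchmarks.
reference_class_months = [9, 11, 8, 14, 10, 12, 13]

def outside_view_estimate(reference_class, specific_adjustment=1.0):
    """Start from the reference-class average, apply a modest project-specific
    adjustment (e.g. 0.9 if genuinely simpler, 1.1 if larger), and report a
    range whose buffer comes from historical variance, not optimism."""
    base = statistics.mean(reference_class)
    spread = statistics.pstdev(reference_class)
    adjusted = base * specific_adjustment
    return adjusted, adjusted + spread

team_bottom_up = 6  # the inside-view estimate the team proposed
expected, with_buffer = outside_view_estimate(reference_class_months, 0.95)

print(f"Inside view (team plan): {team_bottom_up} months")
print(f"Outside view (reference class): {expected:.1f} months, "
      f"plan for up to {with_buffer:.1f} with buffer")
```

The same pattern applies to cost: the buffer multipliers above are simply the historical ratio of actuals to estimates in your reference class.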
Tactical Implementation:
Business case template: Add mandatory section: “Reference class: 3 comparable projects with actual timelines and costs”
Review process: Planning reviews must include someone who asks: “What’s the base rate for this type of initiative?”
Tracking discipline: Measure and publish estimation accuracy quarterly. Make calibration a team competency.
Honest postmortems: When projects run long, document whether it was unique circumstances or planning fallacy, then update forecasting models.
Case Study:
The UK’s Department for Transport implemented reference class forecasting for major infrastructure projects after decades of cost overruns. By starting with historical data on similar projects and adjusting modestly, they achieved dramatically more accurate forecasts and better project outcomes.
BIAS 2: Overconfidence & The Illusion of Validity
What It Is:
Our subjective confidence in judgments reliably exceeds our objective accuracy. In studies where people claimed 100% certainty, they were wrong about 20% of the time. This manifests in three forms: overestimating our actual performance, believing we’re better than average (93% of drivers rate themselves above median), and expressing unwarranted precision in probability estimates.
The Cognitive Mechanism:
Overconfidence emerges because the subjective feeling of confidence is determined by the coherence of the story System 1 constructs, not by the quality of evidence. If an explanation comes easily to mind and fits together smoothly, we feel confident regardless of whether we have good reasons for that confidence. This explains the illusion of validity: we express high confidence in predictions even when our track record is terrible, because each individual prediction feels compelling.
Real-World Cost:
Market entry disasters are driven by overconfident assessments of competitive position. M&A value destruction occurs because acquirers overestimate synergies and integration capabilities. Startup failures are linked to entrepreneurs’ overconfidence in their ability to execute. Executives ignore warning signs because leadership is certain its strategy is correct.
The Fix: Track Predictions vs. Outcomes
The solution is creating feedback loops that calibrate confidence to actual accuracy
Decision journals: Record predictions with explicit probability estimates before outcomes are known. For example: “70% confident this feature will increase engagement >15% within 3 months.”
Quarterly calibration reviews: Compare predictions to outcomes. Were your “70% confident” predictions actually right 70% of the time? Or were they right 40% of the time, indicating overconfidence? (A sketch of this check follows the list below.)
Team-level tracking: Aggregate individual journals to identify patterns. Does your sales team consistently overestimate close rates? Does engineering consistently underestimate complexity?
Process adjustments: Use calibration data to adjust future estimates. If product typically takes 1.8x longer than estimated, apply that multiplier to new estimates.
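As a rough illustration of the quarterly calibration review, here is a minimal Python sketch that groups journal entries by stated confidence and compares each bucket to its actual hit rate. The journal entries are invented for illustration.

```python
from collections import defaultdict

# Hypothetical decision-journal entries: (stated confidence, did it happen?)
journal = [
    (0.9, True), (0.9, True), (0.9, False), (0.9, False),
    (0.7, True), (0.7, False), (0.7, False), (0.7, True), (0.7, False),
    (0.5, True), (0.5, False),
]

buckets = defaultdict(list)
for confidence, outcome in journal:
    buckets[confidence].append(outcome)

for confidence in sorted(buckets, reverse=True):
    outcomes = buckets[confidence]
    hit_rate = sum(outcomes) / len(outcomes)
    flag = "overconfident" if confidence - hit_rate > 0.1 else "well calibrated"
    print(f"Stated {confidence:.0%} -> actually right {hit_rate:.0%} "
          f"({len(outcomes)} predictions, {flag})")
```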
Tactical Implementation:
Forecasting discipline: Any significant forecast requires a recorded confidence level and a review date
Calibration dashboards: Publish accuracy metrics by team and domain
Hiring for humility: In interviews, ask candidates to describe times they were confidently wrong and what they learned
Reward accuracy: Recognize teams that make well-calibrated predictions, not just those who hit targets
Intellectual honesty rituals: Start leadership meetings with “What was I wrong about this quarter?”
Case Study:
Philip Tetlock’s research on forecasting identified “superforecasters”—people who consistently make more accurate predictions than experts. The distinguishing characteristics: they track their accuracy meticulously, update beliefs frequently based on new evidence, avoid overconfidence, and think in probabilities rather than certainties. Organizations can build these capabilities systematically through calibration training and tracking.
BIAS 3: WYSIATI (What You See Is All There Is)
What It Is:
System 1 builds the most coherent story possible from available information while completely ignoring information that’s absent. The acronym WYSIATI captures this perfectly: “What You See Is All There Is.” We don’t spend cognitive energy thinking about what we don’t know. System 1 treats available information as if it’s complete and builds certainty from inadequacy.
The Cognitive Mechanism:
WYSIATI occurs because System 1 doesn’t signal absence. It only processes presence. When information is missing, there’s nothing for System 1 to flag. Paradoxically, having less information can make us more confident because there are fewer pieces to reconcile, making coherent stories easier to construct. This is why venture capitalists can feel supremely confident in startups after a one-hour pitch, or why executives build elaborate strategies based on current customer feedback while being completely blind to non-customers.
Real-World Cost:
Strategic blind spots that leave companies vulnerable to disruption from competitors they never analyzed. Hiring mistakes occur because interview panels assess visible skills while ignoring critical capabilities they didn’t probe. Market failures occur because teams deeply understand current customers but have no insight into why non-customers aren’t buying. Product decisions are driven by vocal users while the silent majority’s needs go unaddressed.
The Fix: Systematic Perspective Widening
The solution is forcing explicit attention to what’s missing
Pre-decision checklist: Every major decision must include: “What information don’t we have? What questions haven’t we asked? What perspectives are missing from this discussion?”
Diverse input mandates: Include people from different functions, geographies, or backgrounds who bring different information sets and see different things
Devil’s advocate with teeth: Not a token role but a formal position with authority to halt decisions until missing information is addressed
Scenario planning: Force consideration of futures that don’t fit current assumptions by systematically varying key variables
Tactical Implementation:
Strategy template: Three-column analysis—“Known knowns, Known unknowns, Potential unknown unknowns”
Cross-functional review: Major decisions require sign-off from someone outside the core team
Customer research discipline: For every 10 customer interviews with users, conduct 3 with people who tried your product and didn’t buy
Competitive intelligence: Track companies in adjacent markets, not just direct competitors
Information gap bounties: Reward team members who identify critical missing information before decisions are made
Case Study:
Nokia’s collapse illustrates WYSIATI catastrophically. They had deep expertise in mobile phones and dominated that market. But their knowledge set was entirely about the mobile phone business as it existed. When smartphones emerged—requiring different capabilities in software, ecosystems, developer relations—Nokia was systematically blind. Their coherent story about mobile phones didn’t include signals about a fundamentally different category emerging. They weren’t stupid; they were operating with WYSIATI in a context where what they didn’t see was everything that mattered.
BIAS 4: Anchoring
What It Is:
Initial numbers, even when random or irrelevant, disproportionately influence all subsequent judgments. In the classic demonstration, people were shown a random number from a wheel spin and incorporated that number into estimates of unrelated questions. Those who saw “10” estimated 25% of African nations were in the UN; those who saw “65” estimated 45%—a massive difference from an obviously irrelevant anchor.
The Cognitive Mechanism:
Anchoring works because System 1 treats the anchor as potentially relevant information, and System 2’s adjustment from that anchor is typically insufficient. We start at the anchor and adjust, but we stop adjusting too soon, leaving final estimates closer to the anchor than they should be. This happens even when we know the anchor is random.
Real-World Cost:
Negotiations are anchored by first offers, leaving money on the table or paying premiums. Budgets are anchored on last year’s numbers regardless of changed circumstances. Valuations are anchored by asking prices in M&A, leading to overpayment. Salary offers are influenced by the candidate’s current compensation rather than market value. Strategic plans are anchored on last year’s plan rather than first-principles thinking.
The Fix: Set Your Own Anchors First
The solution is generating internal estimates before exposure to external anchors
Independent estimation: Require teams to develop budgets, timelines, and valuations independently before seeing comparable data or proposals
Multiple anchors: When you must use external data, deliberately use both high and low anchors to bracket estimates rather than letting a single number dominate
Question anchor sources: Force explicit discussion: “Where did this number come from? Why should we use it as a starting point?”
Zero-based approaches: Periodically rebuild from first principles rather than adjusting prior numbers
Tactical Implementation:
Compensation: Research market rates independently before candidates share current salary expectations
Budgeting: Require zero-based justification for at least 20% of budget categories annually
Negotiations: Prepare your anchors before meetings; go first when possible to set favorable reference points
Vendor evaluation: Generate internal value estimates before receiving proposals
M&A: Develop a standalone valuation before seeing the asking price or banker's books
Case Study:
Experienced negotiators understand anchoring and use it deliberately. In real estate, the first offer strongly influences final sale price. Savvy sellers set high (but not absurd) asking prices to anchor negotiations. Savvy buyers either refuse to negotiate from seller anchors or set their own anchors with pre-emptive offers. The key is recognizing that whoever controls the anchor controls the negotiation frame.
BIAS 5: Loss Aversion & Prospect Theory
What It Is:
Losses loom larger than equivalent gains; the pain of losing $100 is approximately twice as strong as the pleasure of gaining $100. This asymmetry profoundly shapes decision-making. We evaluate outcomes relative to reference points (usually our current state) rather than in absolute terms, and we are risk-averse when facing gains but risk-seeking when facing losses.
The Cognitive Mechanism:
Loss aversion is rooted in evolutionary psychology—avoiding losses was more critical to survival than capturing gains. Prospect Theory formalized this: the value function is steeper for losses than gains, and people exhibit diminishing sensitivity (the difference between $100 and $200 feels larger than between $1,100 and $1,200). Additionally, we misweight probabilities, overweighting low-probability events (buying lottery tickets) and underweighting high-probability events.
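For readers who want the shape of the value function rather than just the description, here is a small Python sketch using commonly cited prospect-theory parameters (curvature around 0.88, loss-aversion coefficient around 2.25, from Tversky and Kahneman's later work); the exact numbers vary by study and are illustrative only.

```python
def prospect_value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Prospect-theory value of a gain or loss x relative to the reference point:
    gains are concave, losses are convex and scaled by the loss-aversion factor."""
    if x >= 0:
        return x ** alpha
    return -lam * ((-x) ** beta)

# The asymmetry: a $100 loss hurts roughly twice as much as a $100 gain feels good.
gain, loss = prospect_value(100), prospect_value(-100)
print(f"value of +$100: {gain:.1f}, value of -$100: {loss:.1f}, "
      f"loss/gain ratio: {abs(loss) / gain:.2f}")

# Diminishing sensitivity: the step from $100 to $200 feels larger
# than the step from $1,100 to $1,200.
print(prospect_value(200) - prospect_value(100))    # ~48
print(prospect_value(1200) - prospect_value(1100))  # ~37
```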
Real-World Cost:
Holding failing investments too long to avoid realizing losses. Status quo bias occurs when change feels like giving up the current position. Risk-averse innovation in profitable businesses because gains aren’t compelling enough. Escalation of commitment to failing projects. Resistance to organizational change occurs because people focus on what they’ll lose rather than what they’ll gain.
The Fix: Reframe Reference Points
The solution is deliberately reframing to make inaction feel like a loss
Opportunity cost framing: “Continuing this failing initiative costs us X in foregone opportunities that capital could fund”
Future reference points: Frame current state as a temporary waypoint, not a permanent reference. “We’re not giving up our position; we’re moving to a better one”
Portfolio perspective: Evaluate collections of decisions rather than individual ones to average out loss aversion’s impact
Separation of economic and emotional: Acknowledge emotional loss while focusing the decision on economic logic
Tactical Implementation:
Portfolio reviews: Quarterly “kill or double down” forcing functions that frame continuing underperformers as opportunity costs
Innovation accounting: Measure and publicize the cost of NOT innovating (market share loss to new entrants, margin compression)
Change management: Lead with what people will lose by NOT changing before describing the benefits of change
Hiring: Frame open roles as “We’re losing X capability every day this stays unfilled” rather than “Should we spend on headcount?”
Celebrate smart stops: Create cultural norms that reward ending failing projects quickly
Case Study:
Amazon’s Jeff Bezos famously distinguishes between Type 1 (irreversible) and Type 2 (reversible) decisions, encouraging fast decision-making on reversible choices. The company regularly kills products and projects without stigma, framing them as learning investments rather than failures to avoid. This cultural norm directly counteracts loss aversion and sunk cost fallacy, enabling faster adaptation than competitors paralyzed by fear of realizing losses.
BIAS 6: The Sunk Cost Fallacy
What It Is:
We continue investing in projects, people, or strategies because of past investment—even when future prospects are dismal. The rational approach is to consider only future costs and benefits, treating past investment as irretrievable regardless of the current decision. But humans don’t think this way. “We’ve already spent $2M; we can’t stop now” is economically nonsensical but psychologically compelling.
The Cognitive Mechanism:
Sunk cost sensitivity stems from loss aversion—stopping feels like accepting a loss, while continuing offers the possibility (however remote) of recovering investment. Commitment consistency also plays a role: we want our past decisions to have been correct, so we persist in validating them. Public commitment intensifies this pressure.
Real-World Cost:
Failed product development continues for years, consuming resources that could fund successful initiatives. Bad acquisitions receive “good money after bad” as organizations escalate commitment rather than cut losses. Underperforming executives are retained because of the investment already made in recruiting and onboarding them. Legacy technology platforms are maintained long past their useful life because of cumulative investment.
The Fix: Future-Only Evaluation
The solution is forcing decision-makers to ignore past investments
Zero-based question: “If we hadn’t already invested X, would we start this project today with our current knowledge?”
Explicit sunk cost acknowledgment: “The $2M is gone regardless of our decision. What’s our best path forward from here?”
Regular portfolio reviews: Quarterly evaluation of all initiatives, independent of past investment, with clear kill/continue/double-down decisions
Stage-gate funding: Each phase requires fresh justification based on updated information, not automatic continuation
Tactical Implementation:
Investment memos: Include an explicit statement: “This go/no-go decision is independent of prior investment”
Stage funding: Structure major initiatives as a series of options where each phase earns the right to the next phase based on outcomes
Quarterly portfolio: Force-rank all initiatives; kill bottom 10-20% regardless of sunk costs
Celebrate smart stopping: Create awards and recognition for teams that kill failing projects quickly based on evidence
Reframe: Call past investment “learning costs” rather than losses to psychologically separate from the current decision
Post-stop success stories: Track and publicize resources redeployed from killed projects to successful ones
Case Study:
Google’s willingness to kill products despite massive investment illustrates organizational discipline against sunk costs. Google+, Google Reader, and dozens of other products were shuttered after consuming significant resources. The cultural norm: past investment buys learning, not automatic continuation. This discipline enables faster resource reallocation and prevents the “zombie project” phenomenon that plagues slower-moving competitors.
BIAS 7: Confirmation Bias & Groupthink
What It Is:
We seek, favor, interpret, and remember information that confirms existing beliefs while ignoring or discounting contradictory evidence. In group settings, this combines with social pressure to create groupthink, the prioritization of consensus over critical evaluation, where dissent is suppressed and alternatives are not seriously examined.
The Cognitive Mechanism:
Confirmation bias occurs because confirmatory evidence produces cognitive ease—it feels right, requires less effort to process, and doesn’t challenge existing mental models. Disconfirming evidence creates cognitive dissonance, which is uncomfortable. In groups, social dynamics amplify this: dissent carries interpersonal risks, leadership signals often indicate preferred conclusions, and the appearance of consensus prevents individuals from voicing doubts.
Real-World Cost:
Strategic plans that go unchallenged until market disproves them. Hiring decisions where interview panels validate initial impressions while ignoring warning signs. Product development that builds features users say they want but don’t actually use. Market research that confirms what executives already believe. Due diligence that validates deals the leadership wants to do. Crisis situations where teams fail to update strategies as evidence accumulates that they’re failing.
The Fix: Institutionalize Dissent
The solution is making the challenge structural rather than heroic
Pre-mortem analysis: Before launch, imagine the initiative failed spectacularly. Have team members independently list causes. This legitimizes voicing concerns.
Red team: Designate individuals or teams with explicit authority and responsibility to challenge plans. Make this a rotating assignment so it’s not always the same contrarian.
Anonymous input: Collect initial opinions before group discussion to prevent premature convergence and authority bias.
Devil’s advocate: Formal role assignments to argue against proposals, with explicit protection from retaliation.
Tactical Implementation:
Major decisions: Require pre-mortem for any >$500K investment or irreversible choice
Hiring: Assign one interviewer explicit role of finding disqualifying information rather than confirming fit
Strategy reviews: Agenda item: “What would have to be true for this strategy to fail?”
Product development: Interview non-buyers explicitly: “Why didn’t you choose our product?”
M&A: Independent red team reviews deal thesis before board approval
Meeting design: Initial votes before discussion; junior speaks before senior; cold calling for dissent
Reward dissent: Publicly recognize constructive challenges that improved decisions
Case Study:
Intel’s Andy Grove institutionalized a “disagree and commit” culture. Vigorous debate was not just permitted but required before major decisions. Anyone could challenge anyone’s ideas if they brought evidence and logic. But once a decision was made, the entire organization committed fully to execution. This prevented groupthink during deliberation while maintaining decisiveness in action—the optimal balance.
BIAS 8: Hindsight Bias
What It Is:
After an outcome is known, we perceive it as having been more predictable than it actually was. It’s the “I knew it all along” phenomenon. This systematic distortion of memory makes past events seem inevitable and leads to unfair evaluation of decision-makers based on outcomes rather than process quality.
The Cognitive Mechanism:
Once we know an outcome, that knowledge is automatically integrated into our understanding of the past. The uncertainty that characterized the original decision gets forgotten. We construct coherent backward narratives that make the outcome seem obvious, ignoring the many alternative paths that could have occurred. The distortion is stronger when outcomes are negative and severe.
Real-World Cost:
Unfair performance evaluations, where leaders are blamed for outcomes that couldn’t have been foreseen, given the information available at the time. Failure to learn from decisions because we misremember our original reasoning and certainty. Risk aversion occurs because unlucky outcomes get punished even when the decision-making process is sound. Loss of institutional knowledge as “failed” decision-makers are pushed out. Inability to distinguish good decisions with bad outcomes from bad decisions with good outcomes.
The Fix: Judge Process, Not Just Outcome
The solution is separating decision quality from outcome quality
Pre-decision documentation: Write down reasoning, predictions, uncertainties, and confidence levels before the outcome is known
Process review: When evaluating decisions, ask “Was the process sound given the information available at the time?” not “Was the outcome good?”
Outcome vs. decision quality matrix: Explicitly separate lucky bad decisions (poor process, good outcome) from unlucky good decisions (sound process, bad outcome)
Focus on decisions within control: Evaluate the quality of analysis, alternatives considered, and information gathering—not factors outside the decision-maker’s control
Tactical Implementation:
Decision journals: Mandatory for major decisions; reviewed as pairs (decision + outcome) rather than outcomes alone
Post-mortems: Start with “What did we think would happen and why?” before analyzing what actually happened
Performance reviews: Include section on decision quality independent of outcomes: “What was the process? What information was available? Were alternatives considered?”
Celebrate well-reasoned failures: Recognize decisions where process was excellent but outcomes were unlucky (demonstrating learning culture)
Track leading indicators: Evaluate decisions based on intermediate signals, not just final outcomes
Historical record: Maintain archives of decision documents to prevent memory revision
Case Study:
Annie Duke, professional poker player turned decision strategist, argues forcefully against “resulting”—the mistake of judging decision quality by outcomes. In poker, good decisions sometimes produce bad outcomes (you made the mathematically correct play but got unlucky). Duke’s framework: judge decisions by whether they maximized expected value given information at the time, not by whether they happened to work out. Organizations that adopt this mindset learn faster because they can identify truly bad processes even when outcomes were lucky.
BIAS 9: Base Rate Neglect
What It Is:
We ignore or underweight statistical base rates (prior probabilities) in favor of specific case information. When shown compelling individual details, we abandon general population statistics even though those statistics should be the starting point for any probability judgment.
The Cognitive Mechanism:
Base rate neglect occurs because specific descriptions are vivid, concrete, and representative, while statistical information feels abstract and generic. The representativeness heuristic overwhelms statistical reasoning. We judge probability by how well the description matches our prototype rather than by actual population frequencies.
Real-World Cost:
Startup investing where compelling founder stories override base rates (90% of startups fail). Hiring decisions where impressive interviews trump statistical success rates for candidate profiles. Market entry based on “our unique advantages” while ignoring base rates showing most entrants fail. Medical testing where positive results get overweighted relative to disease prevalence. Due diligence that focuses on deal-specific narratives while ignoring category-level failure rates.
The Fix: Always Start with Base Rates
The solution is mandating statistical analysis before case-specific judgment
Ask first: “What’s the base rate for this type of situation/decision/outcome?”
Bayesian updating: Start with the prior probability from the base rate, then adjust based on specific evidence. Never replace the base rate entirely. (A worked sketch follows this list.)
Reference class requirement: Every proposal must include data on success rates for comparable situations
Document adjustment rationale: If claiming your situation is different from base rate, explicitly explain why with evidence
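Here is a worked sketch of the kind of update the Bayesian bullet describes, in Python with hypothetical numbers: a category with a 10% base rate of success and a compelling signal (say, strong early traction) that shows up far more often among eventual successes than among failures.

```python
def bayesian_update(base_rate, p_signal_if_success, p_signal_if_failure):
    """Posterior probability via Bayes' rule: start from the base rate,
    then let the specific evidence shift it -- never replace it outright."""
    success = base_rate * p_signal_if_success
    failure = (1 - base_rate) * p_signal_if_failure
    return success / (success + failure)

# Hypothetical numbers, for illustration only.
base_rate = 0.10               # 10% of comparable ventures succeed
p_signal_given_success = 0.60  # strong traction appears in 60% of successes
p_signal_given_failure = 0.20  # ...and in 20% of failures

posterior = bayesian_update(base_rate, p_signal_given_success, p_signal_given_failure)
print(f"Base rate: {base_rate:.0%}, after seeing the signal: {posterior:.0%}")
# The evidence moves the estimate from 10% to 25% -- meaningful, but far from
# the near-certainty a compelling narrative tends to produce.
```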
Tactical Implementation:
Investment decisions: “What percentage of companies with this profile/stage/market succeed? What evidence makes us believe this one is different?”
Hiring: “What percentage of people with this background excel in this role?” Then adjust for individual performance data
Strategic planning: “What percentage of companies attempting this strategy succeed?” before assuming you’re the exception
Product launches: “What percentage of products in this category achieve target adoption?” before modeling your forecast
Build base rate library: Maintain a database of outcomes by category to enable quick base rate lookup
Bayesian training: Teach teams explicit probability updating rather than replacing base rates
Case Study:
Y Combinator’s investment approach explicitly uses base rates. They know most startups fail. Their process is to start with a base failure rate and use specific information (team, traction, market) to identify outliers likely to beat the base rate. They don’t let compelling founder stories override statistical reality—they look for concrete evidence of differentiation from the base case. This discipline has produced exceptional returns precisely because it doesn’t fall for narrative over numbers.
BIAS 10: Narrative Fallacy & Illusion of Understanding
What It Is:
We construct simple, coherent causal stories to explain complex outcomes, systematically underestimating the role of luck and randomness while overemphasizing skill and intentionality. These retrospective narratives create an illusion that the past was predictable and that we understand causal mechanisms better than we actually do.
The Cognitive Mechanism:
Humans are story-making machines. Coherent narratives feel true and are easily remembered, while statistical complexity and multiple interacting factors feel unsatisfying. We focus on striking events that happened while ignoring countless events that didn’t happen but could have. The result: oversimplified cause-and-effect stories that attribute success to talent and strategy, while reality involved significant luck and contingency.
Real-World Cost:
Attributing success to specific decisions or leaders when luck played a major role, leading to overconfidence in future similar decisions. Failing to learn from failures because wrong causal stories are constructed (scapegoating individuals rather than examining systemic factors). Case study worship where business books lionize companies based on coherent narratives that collapse when circumstances change. Strategy formulation based on misunderstood past successes. Leadership hiring based on narrative credentials rather than systematic evidence.
The Fix: Complexity Acknowledgment
The solution is forcing consideration of alternative explanations and the role of chance
Multiple explanations: “What are five different explanations for this outcome?” Force plural hypotheses rather than single narratives
Luck vs. skill decomposition: “How much was due to decisions within our control vs. external factors?”
Counterfactual thinking: “What would have happened if key contingencies went differently?”
Checklist approaches: Use systematic evaluation criteria rather than narrative persuasiveness
Tactical Implementation:
Success analysis: “What were the 10 things that had to go right for this to succeed? How many were within our control?”
Failure analysis: “What were all contributing factors?” with explicit separation of controllable vs. uncontrollable
Case study skepticism: Treat business book success stories as entertainment, not data. Ask “What role did timing/luck play?”
Post-mortems: Include “alternative history” exercise—how close were we to different outcomes?
Leader evaluation: Distinguish outcomes from decision quality; assess decisions where luck was removed
Strategic planning: Scenario analysis forcing multiple plausible futures rather than single predicted path
Case Study:
Phil Rosenzweig’s “The Halo Effect” systematically demonstrates narrative fallacy in business writing. Companies like Cisco, ABB, and others were celebrated for their culture, strategy, and leadership when performing well. When performance declined, the exact same factors were cited as weaknesses. The coherent stories about what drove success were just-so narratives constructed after the fact, not genuine insights into causation. The lesson: be deeply skeptical of simple explanations for complex organizational outcomes.
BIAS 11: The Endowment Effect
What It Is:
We value things more highly simply because we own them. In classic experiments, people given coffee mugs demanded roughly twice as much money to sell them as other people would pay to buy identical mugs. This “mere ownership” effect extends beyond possessions to ideas, processes, strategies, and people—the “Not Invented Here” syndrome.
The Cognitive Mechanism:
The endowment effect stems from loss aversion applied to possessions. Giving up something we own feels like a loss (which stings), while the money received feels like a gain (which is less compelling). This asymmetry in how we value losses versus gains makes us overvalue what we have. The effect is stronger for items “for use” (things we plan to consume or employ) than items “for exchange” (things we plan to trade).
Real-World Cost:
Keeping underperforming products, assets, or business units because “we’ve invested so much in building them.” Overvaluing internal solutions versus external alternatives (“we need to build this ourselves”). Resisting beneficial changes that require giving up familiar processes or systems. Retaining underperforming team members because of sunk investment in hiring and training. Pricing acquisitions based on internal view of value rather than market value.
The Fix: Outside Perspective
The solution is forcing valuation from a non-owner perspective
Hypothetical sale: “If we didn’t own this asset/use this process, would we buy/implement it at current market price?”
Fresh eyes: Have people not invested in the status quo conduct evaluations
Regular portfolio pruning: Systematically evaluate and eliminate bottom performers, forcing active choice to retain rather than passive drift
Market value discipline: Use external market prices rather than internal “strategic value” for asset decisions
Tactical Implementation:
Product portfolio: Annual review asking “Would we launch this product today knowing what we now know?”
Talent management: “If this person left tomorrow, would we fight hard to rehire them at market rate?”
Process review: “If a new CEO arrived, what would they change?” Then make those changes without waiting for a new CEO
Make vs. buy: Require that “build” decisions beat “buy” on objective criteria, not just “we prefer to own it”
Strategic review: Quarterly “zero-based” evaluation where each initiative must earn continued funding
Bring in outsiders: Include board members or advisors without emotional attachment in major retention/divestiture decisions
Case Study:
Private equity firms explicitly combat endowment effect through portfolio management discipline. They buy companies, improve them, and sell them based on objective return criteria. The cultural norm: nothing is sacred, every asset must earn its place in the portfolio, and external market value is the ultimate arbiter. This mindset enables faster resource reallocation than strategic corporate owners who fall prey to endowment effect and hold underperforming assets too long.
BIAS 12: The Peak-End Rule & Duration Neglect
What It Is:
People judge experiences based almost entirely on the emotional peak (most intense moment) and how the experience ended, largely ignoring total duration and average experience quality. A 90-second painful experience with a less painful ending is remembered as less negative than a 60-second experience with an abrupt painful ending, even though the longer experience contained more total pain.
The Cognitive Mechanism:
Memory cannot store every moment of an experience, so System 1 compresses experiences into representative samples. The peak and end are most cognitively accessible, so they dominate memory formation. Duration has surprisingly minimal impact—this “duration neglect” means a three-hour positive experience isn’t remembered as much better than a one-hour positive experience if peak and end are similar.
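To make duration neglect concrete, here is a toy Python sketch contrasting what a moment-by-moment account of an experience records with the summary that memory keeps. The discomfort scores are invented, and the peak-end average is a simplified stand-in for how remembered ratings behave in the experiments.

```python
# Discomfort scores sampled every 30 seconds (0 = none, 10 = severe), invented.
short_experience = [7, 8]       # 60 seconds, ends abruptly at its worst moment
long_experience = [7, 8, 4]     # 90 seconds, same peak, milder ending

def total_discomfort(samples):
    """What actually happened: discomfort summed over the whole duration."""
    return sum(samples)

def remembered_discomfort(samples):
    """Simplified peak-end summary: memory keeps the worst moment and the
    ending, and largely ignores how long the experience lasted."""
    return (max(samples) + samples[-1]) / 2

for name, samples in [("short", short_experience), ("long", long_experience)]:
    print(f"{name}: total={total_discomfort(samples)}, "
          f"remembered~{remembered_discomfort(samples):.1f}")
# The longer experience contains more total discomfort (19 vs 15) but is
# remembered as milder (6.0 vs 8.0) because its ending is gentler.
```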
Real-World Cost:
Employee experience: A bad exit process (poor offboarding, tense final weeks) ruins memory of entire tenure, affecting referrals and boomerang recruiting. Customer experience: Last interaction (customer support issue, checkout friction) disproportionately affects lifetime value and Net Promoter Score, regardless of months of positive service. Project evaluation: A crisis in final weeks overshadows months of smooth execution, affecting team morale and learning. Product perception: Onboarding (first experience) and churn point (final experience) dominate overall product satisfaction ratings.
The Fix: Design for Memory
The solution is deliberately engineering peak moments and endings
Peak moments: Identify points of highest emotional intensity and invest in making them exceptional
Strong endings: Disproportionate attention to final experiences—last day of project, last interaction with customer, last week of employment
Recognize duration neglect: Don’t assume longer positive experiences automatically create better memories; focus on peaks and endings
Measure remembered experience: Survey customers/employees about their overall memory, not just current satisfaction
Tactical Implementation:
Employee lifecycle: Make final week exceptional (celebration, knowledge transfer, alumni network invitation), not just administrative
Customer journey: Map peak emotion points (purchase decision, first use, renewal) and ending (churn or expansion); invest heavily in these
Project closeouts: Celebrate completion with memorable events, don’t just disband teams quietly
Product design: Obsess over onboarding (first peak) and moments of delight (subsequent peaks)
Service recovery: When mistakes happen, create exceptional recovery experience (turns negative peak into positive one)
Offboarding: Create world-class exit experiences for churning customers (potentially recovers them as future customers)
Case Study:
Disney theme parks demonstrate peak-end mastery. They manage queue experiences to create peak moments (interactive elements, character appearances), control the final experience (exit through gift shop with positive memories fresh), and understand that duration of waiting matters far less than how that waiting feels at its peak and end. Visitors remember the experience as magical even if they spent hours waiting, because the peaks were exceptional and the endings were controlled.
Building Bias-Resistant Organizations: The Comprehensive Strategic Framework
Organizations fail because systems amplify rather than counteract cognitive biases. The most sophisticated strategy dies in execution when teams fall prey to groupthink, when planning fallacies destroy timelines, when confirmation bias blinds leadership to disconfirming evidence. The solution isn’t exhorting people to “think better.” It’s building organizational architectures that make better thinking inevitable.
What follows is a three-pillar framework I’ve developed based on my study of Kahneman and Tversky’s work. This architecture is designed for embedding debiasing into your company’s operating system, transforming how decisions get made, how teams collaborate, and how learning compounds over time.
Pillar 1: Psychological Safety
Pillar 2: Structured Decision Architecture
Pillar 3: Measurement & Learning Loops
Closing: The Competitive Advantage of Better Thinking
Most companies compete on strategy, execution, and talent. Few compete on the quality of their decision-making architecture. As Kahneman and Tversky revealed, cognitive bias isn’t a flaw; it’s the default setting.
Companies that operationalize this truth, building safety, structure, and measurement into their system, will transform how execution impacts outcomes.
The result: Not perfect rationality, but dramatically better decisions and the foundation for enduring advantage.
Daniel Kahneman and Amos Tversky’s half-century legacy was to show us our operating system. Now, it’s your turn to update it for your company and teams.
Resources & Quick Reference
Books: Thinking, Fast and Slow (Kahneman), Noise (Kahneman, Sibony, Sunstein), The Fearless Organization (Edmondson), Thinking in Bets (Duke)
Frameworks: SPADE, OKRs, Pre-mortems, Decision Journals
Research: Project Aristotle (Google), Five Dysfunctions (Lencioni), McKinsey’s Team Effectiveness, Bent Flyvbjerg’s megaproject forecasting
If you’d like to work together, I’ve carved out some time to work 1:1 each month with a few top-notch founders and operators. You can find the details here.
John Brewton documents the history and future of operating companies at Operating by John Brewton. He is a graduate of Harvard University and began his career as a PhD student in economics at the University of Chicago. After selling his family’s B2B industrial distribution company in 2021, he has been helping business owners, founders, and investors optimize their operations ever since. He is the founder of 6A East Partners, a research and advisory firm asking the question: What is the future of companies? He still cringes at his early LinkedIn posts and loves making content each and every day, despite the occasional protestations of his beloved wife, Fabiola.