Operating Charts: How to Measure AI-Bubble Risk
A Visual Guide to the Economic Structure of Systemic Risk
Introduction: From Argument to Application
In “This Is Not 1929 or 2008,” I argued that economic structure, not scale or sentiment, determines whether bubbles destroy economies or merely punish investors. The argument relied on specific data: market concentration, leverage ratios, banking exposure metrics, and infrastructure deployment figures.
This analysis unpacks those variables with a new set of charts. Where the earlier piece presented the thesis, this one shows you how to read the economic data yourself: which metrics matter for business decisions, which indicators warrant monitoring, and how to build your own risk assessment framework.
Most bubble analysis works backward: it starts with a conclusion, then marshals data to support it. This piece inverts that approach. We start with the data, understand what it reveals about structure, and then draw conclusions. Each chart represents months of academic research condensed into visual form. More importantly, each contains insights that should inform your thinking about capital allocation during market volatility.
The earlier article explained why structure matters. This shows you how to measure it.
The Operator’s Risk Dashboard: A Scoring Framework
Before examining individual charts, we need to understand the operating framework for assessing risk:
Three-Factor Risk Model
Total Risk Score:
0-10: Aggressive positioning, invest in business capabilities
11-20: Balanced approach, selective hedging
21-30: Defensive positioning, preserve liquidity
Current 2025 Scoring:
Amplification: 2/10 (equity-funded, low leverage)
Transmission: 3/10 (banking isolation intact)
Policy Constraint: 1/10 (Fed has full flexibility)
Total: 6/30 → Aggressive positioning warranted
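For readers who want to track this score over time, here is a minimal sketch of the framework in code. The factor names, the 0-10 scale, the 0-30 bands, and the 2025 scores come straight from the framework above; the class and method names are just illustrative choices.

```python
from dataclasses import dataclass

@dataclass
class RiskScore:
    """Three-factor risk model: each factor scored 0-10, total 0-30."""
    amplification: int      # leverage-driven forced-selling risk
    transmission: int       # banking-system exposure to the asset class
    policy_constraint: int  # how boxed-in the Fed is

    @property
    def total(self) -> int:
        return self.amplification + self.transmission + self.policy_constraint

    @property
    def posture(self) -> str:
        if self.total <= 10:
            return "Aggressive positioning, invest in business capabilities"
        if self.total <= 20:
            return "Balanced approach, selective hedging"
        return "Defensive positioning, preserve liquidity"

# Current 2025 scoring from the framework above
score_2025 = RiskScore(amplification=2, transmission=3, policy_constraint=1)
print(score_2025.total, "/ 30 ->", score_2025.posture)
# 6 / 30 -> Aggressive positioning, invest in business capabilities
```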
The charts below explain how these scores are derived and what to monitor for changes.
CHART 1: Market Concentration - Understanding Correlation Risk
What the Data Shows
The Magnificent 7 stocks now represent 38% of S&P 500 market capitalization. This exceeds the 1929 utilities peak (18%) and the 2000 dot-com apex (29%). In the earlier article, I noted this concentration but focused on why it doesn’t automatically signal catastrophe. Here’s what concentration actually does to portfolio dynamics.
The Mechanism: How Concentration Amplifies Volatility
Market concentration doesn’t cause crises. It amplifies them. The mechanism operates through correlation breakdown. When portfolio weight concentrates in fewer names, diversification fails when you need it most.
AQR Capital’s research shows that equity risk dominates institutional portfolios regardless of asset class diversification when single-sector concentration exceeds 30%. The math is straightforward: if 38% of your index exposure moves together, and those seven stocks correlate at 0.85+ during stress, your effective diversification drops from 500 names to roughly 50 independent risk factors.
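To make that arithmetic concrete, here is a rough sketch of the standard "effective number of holdings" calculation (the inverse Herfindahl of index weights) plus a correlation-adjusted variant. The weights are stylized (38% split equally across seven names, the rest split equally across 493), and the 0.85 stress correlation and 0.30 baseline correlation are assumptions, so treat the outputs as illustrations of the mechanism rather than precise estimates.

```python
import numpy as np

# Stylized index: 7 mega-caps hold 38% of the weight, the other 493 split the rest.
n_big, n_rest = 7, 493
w = np.concatenate([np.full(n_big, 0.38 / n_big),
                    np.full(n_rest, 0.62 / n_rest)])

# Effective number of independent holdings from weights alone (inverse Herfindahl).
print(f"Effective holdings, weights only: {1.0 / np.sum(w**2):.0f}")  # ~47, i.e. 'roughly 50'

# Correlation-adjusted view: 0.85 among the mega-caps under stress,
# 0.30 baseline everywhere else (both assumed), equal volatilities.
corr = np.full((n_big + n_rest, n_big + n_rest), 0.30)
corr[:n_big, :n_big] = 0.85
np.fill_diagonal(corr, 1.0)
print(f"Effective independent bets under stress: {1.0 / (w @ corr @ w):.1f}")  # low single digits
```

The second figure is extremely sensitive to the assumed baseline correlation, but the direction is the point: once a 38% block moves together, effective breadth collapses far below the 500 names on the label.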
Wall Street Journal analysis of MFS Investment Management data reveals the causality: “Index concentration and passive ownership reaching all-time highs result from the same underlying cause: investor demand for stocks of companies with massive profit growth.” The AI-5 (Alphabet, Amazon, Meta, Microsoft, Nvidia) showed net income growth expectations 20 times higher than the S&P 493 in 2023.
Liquidity That Doesn’t Scale
The Financial Times warns that concentration creates risk not from the concentrated stocks themselves, but from liquidity that doesn't scale during stress. Every dollar that passive index flows steer toward the concentrated names suppresses valuations for the remaining 493 stocks, creating valuation disconnects.
When rotation occurs, liquidity in the Magnificent 7 will be sufficient. But the 493 stocks experiencing artificial valuation suppression may see violent moves as capital floods back. Bloomberg data shows the 493 stocks trade at 33% discount to historical norms relative to the concentrated seven.
What This Means for Your Business
Operational Implications:
Vendor Risk: If your key technology vendors are among the Magnificent 7, their stock volatility won’t impair their operational stability. Balance sheets are fortress-like. Unlike 2008, your cloud provider won’t disappear because its stock falls 50%.
Competitive Positioning: Concentration means AI capabilities are being built by a handful of players. Integration partnerships with these platforms offer leverage but create dependency risk if regulatory intervention forces breakups.
Capital Markets Access: If your company relies on equity markets for growth capital, understand that investor appetite for “the other 493” varies inversely with Magnificent 7 performance. Plan financing when rotation favors broader markets.
Monitoring Protocol
Weekly: Track Magnificent 7 performance dispersion (divergence signals unwinding concentration trade)
Monthly: Monitor passive fund flows (accelerating inflows increase concentration risk)
Quarterly: Review S&P 493 relative valuation (widening discount creates rotation risk)
Trigger for Action: If concentration exceeds 45% or passive flows reverse sharply, rotation is imminent, and defensive hedging becomes appropriate.
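If you wire this protocol into a simple dashboard, the concentration trigger reduces to a few lines. The market-cap figures below are illustrative placeholders rather than current data; only the 45% trigger comes from the protocol above.

```python
# Weight of the Magnificent 7 in the index, from market caps (all in $tn; illustrative only).
mag7_caps = {"AAPL": 3.4, "MSFT": 3.1, "NVDA": 3.3, "GOOGL": 2.1,
             "AMZN": 2.0, "META": 1.4, "TSLA": 0.8}
sp500_total = 43.0  # illustrative only

weight = sum(mag7_caps.values()) / sp500_total
status = "hedge defensively" if weight > 0.45 else "keep monitoring"
print(f"Magnificent 7 weight: {weight:.0%} -> {status}")
```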
The 2026-2028 Scenarios
Bull Case (35% probability): AI revenue materializes faster than expected, justifying valuations. Concentration stabilizes at 35-40%. Magnificent 7 grow into multiples through earnings rather than multiple expansion.
Base Case (50% probability): Concentration mean-reverts to 28-32% over 18-24 months through rotation, not crash. Magnificent 7 returns lag broader market but remain positive. This provides attractive entry points in 493 stocks.
Bear Case (15% probability): Regulatory intervention or AI disappointment triggers rapid de-concentration. Magnificent 7 correct 40-60%, dragging indices down 20-30%. But the economic structure (low leverage, banking isolation) prevents systemic cascade.
CHART 2: Leverage as % of GDP - The Amplification Mechanism
Why This Is the Decisive Chart
In the earlier article, I argued leverage is the critical structural difference. Here’s a quantitative framework to assess when leverage becomes systemically dangerous.
NBER economist Òscar Jordà’s 140-year, 17-country analysis demonstrates that leverage consistently predicts crisis severity across all time periods and countries. But the mechanism matters more than the number. Leverage creates amplification: small price movements can trigger large position changes through forced liquidation.
The Threshold Analysis Framework
Research from The Economist suggests crisis probability increases exponentially above specific leverage thresholds:
Leverage Zones:
0-2% of GDP: Minimal systemic risk, equity-funded growth
2-5% of GDP: Elevated but manageable, monitor acceleration
5-8% of GDP: Dangerous territory, cascade risk material
8%+ of GDP: Crisis-level leverage, cascades probable
Current AI-specific leverage sits at 1.2% GDP, well within the minimal risk zone. Total margin debt at 3.8% GDP remains below the 5% threshold where historical crises accelerate.
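Encoded as a function, the zone framework looks like this. It is a sketch: the boundaries are the ones listed above, everything else is incidental.

```python
def leverage_zone(leverage_pct_gdp: float) -> str:
    """Map leverage (as % of GDP) to the risk zones described above."""
    if leverage_pct_gdp < 2:
        return "Minimal systemic risk, equity-funded growth"
    if leverage_pct_gdp < 5:
        return "Elevated but manageable, monitor acceleration"
    if leverage_pct_gdp < 8:
        return "Dangerous territory, cascade risk material"
    return "Crisis-level leverage, cascades probable"

print(leverage_zone(1.2))  # AI-specific leverage -> minimal systemic risk
print(leverage_zone(3.8))  # total margin debt   -> elevated but manageable
```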
The Shadow Leverage Problem
Bloomberg stress test analysis reveals a nuance: leverage ratios constrain systemically important banks, while institutions outside that designation remain unencumbered. The growth of private credit to $2 trillion creates leverage outside traditional measurement frameworks.
Adjusted Calculation:
Traditional margin debt: 3.8% GDP
Private credit AI exposure estimate: 0.5-0.8% GDP
Unleveraged retail investment: 0.2% GDP
Effective leverage: 4.3-4.8% GDP
Still below the 5% crisis threshold, but closer than headline numbers suggest. This warrants monitoring, particularly if private credit continues expanding at 20%+ annually.
What Deutsche Bundesbank’s Methodology Reveals
The Deutsche Bundesbank’s climate stress test framework offers applicable methodology: measuring concentration risk within portfolios reveals that aggregate numbers mask heterogeneous exposure. Banks with over 10% exposure to AI-adjacent loans show vulnerability ratios 2.5 times higher than system averages, but these banks represent less than 15% of total banking assets.
Translation: Pockets of high leverage exist, but systemic concentration remains low. Specific institutions face risk; the system does not.
What This Means for Your Business
Operational Implications:
Credit Line Stability: Low systemic leverage means business lending remains stable even during equity market volatility. Your operating line won’t disappear because Nvidia drops 50%. This differs fundamentally from 2008 when mortgage losses froze all credit channels regardless of your creditworthiness.
Investment Timing: Historically, low-leverage environments favor aggressive investment in productivity-enhancing capabilities. Competitors who preserve cash waiting for "stability" miss the 14-34% productivity gains (MIT data) that early adopters capture.
Acquisition Opportunities: If correction materializes, low leverage means forced-sale dynamics won’t dominate. Patient capital can still acquire quality assets at discounts from overleveraged sellers.
Monitoring Protocol
Monthly: FINRA margin debt statistics (tracks retail and institutional borrowing)
Monthly: Private credit issuance data from S&P (monitors shadow leverage growth)
Quarterly: Fed stress test results showing banking sector AI exposure
Trigger for Action: If total effective leverage crosses 5.5% GDP or accelerates over 30% year-over-year, shift toward defensive positioning.
The 2026-2028 Scenarios
Bull Case (30% probability): Leverage stays below 4% GDP through 2028. Equity-funded growth continues. Private credit regulation slows expansion. Low-leverage environment persists, favoring aggressive business investment.
Base Case (55% probability): Leverage drifts to 4.5-5.5% GDP as private credit expands. Regulators implement light-touch oversight. Some volatility occurs but no cascade. Selective hedging appropriate, business investment continues.
Bear Case (15% probability): Leverage exceeds 6% GDP as private credit booms unchecked. Regulatory intervention triggers private credit contraction. Cascade risk materializes but remains contained (unlike 2008) due to banking system isolation. Defensive positioning, preserve liquidity.
CHART 3: AI Investment Breakdown - Where Capital Accumulates
What the Data Shows
McKinsey’s infrastructure projection reveals capital allocation: $3.1 trillion (60%) in chips and servers, $1.3 trillion (25%) in power generation and transmission, $0.8 trillion (15%) in land and site development. In the earlier article, I argued that these are tangible assets with residual value, unlike the homes of 2008, which were depreciating.
New NBER Research on Measurement Challenges
Diane Coyle’s “Making AI Count” reveals critical gaps in how we track AI capital deployment. Geographic dispersion of AI infrastructure means national statistics fail to capture cross-border investment flows. Data centers supporting U.S. AI training may be physically located in Ireland, Singapore, or Iceland for power and tax optimization, complicating attribution.
More importantly, power consumption projections vary wildly. Barclays forecasts 18% annual growth in U.S. data center energy demand through 2030, while IEA estimates only 7%. This 11-percentage-point spread represents hundreds of billions in potential over- or under-investment.
Current Reality Check:
Data center construction permits filed in Northern Virginia show 134 projects totaling 9.2 million square feet in development, a 47% increase year-over-year. But power grid capacity commitments, at roughly 64 gigawatts, sit at the low end of the projected need of 50-134 gigawatts. This mismatch suggests either that power projections are inflated or that significant capital will strand waiting for grid upgrades.
What Makes This Different from Housing (2008)
The critical distinction between AI infrastructure and 2008 housing: residual value and alternative uses.
The Dot-Com Fiber Precedent Revisited
The Wall Street Journal's retrospective "Optical Delusion? Fiber Booms Again, Despite Bust" traces the pattern. Telecommunications companies laid 80+ million miles of fiber-optic cable in the late 1990s, racing to build networks on inflated demand projections.
The Catastrophe (for investors):
85-95% of laid fiber sat “dark” (unused) for years
Corning stock crashed from $100 to $1
Ciena revenue fell from $1.6 billion to $300 million
Nortel, WorldCom, Global Crossing bankrupt
The Windfall (for society):
Universities acquired dark fiber networks at pennies on the dollar
By 2016, internet traffic had filled capacity
Infrastructure became backbone of modern internet, cloud computing, streaming media, mobile data
Real estate conversions: One Wilshire in Los Angeles (an office tower turned carrier hotel and data center) sold for $437M (2013); Google acquired 111 Eighth Avenue (a former freight terminal turned data hub) for $1.8B (2010)
NBER research on technological diffusion reveals a consistent pattern: general-purpose technologies experience initial overinvestment (bubbles), prolonged absorption periods (dark capacity), followed by transformative utilization as complementary innovations emerge.
What This Means for Your Business
Operational Implications:
Infrastructure Access: Current buildout ensures abundant AI compute capacity for years, even if some providers fail. Your access to GPT-equivalent capabilities won’t disappear. It may become dramatically cheaper as overcapacity drives prices down.
Acquisition Timing: If correction materializes and AI companies face distress, quality infrastructure assets will trade at discounts. Operators with patient capital can acquire capabilities at fire-sale prices, the strategy that created today’s cloud giants from dot-com wreckage.
Vendor Diversification: With $5.2T being deployed, single-vendor lock-in becomes less critical. Multiple providers will have comparable capabilities, giving you negotiating leverage on pricing and terms.
Monitoring Protocol
Monthly: Data center vacancy rates (CoreSite, Equinix reports show utilization trends)
Quarterly: Chip inventory levels from TSMC, Samsung, Intel (indicates demand versus supply balance)
Annually: Power grid capacity commitments versus planned data center deployment
Trigger for Action: If data center vacancy exceeds 25% or chip inventory days exceed 120, overcapacity is material—patient capital opportunities emerging.
The 2026-2028 Scenarios
Bull Case (25% probability): AI adoption accelerates faster than infrastructure deployment. Capacity utilization exceeds 75% by 2027. Revenue materializes, justifying investment. Early infrastructure advantages create moats.
Base Case (60% probability): “Dark infrastructure” phase lasts 3-5 years (paralleling fiber). 40-60% utilization initially, gradually filling as complementary innovations develop. Write-downs occur, but assets retain value. Patient capital opportunities are abundant.
Bear Case (15% probability): Severe overcapacity (70%+ vacant). Major provider bankruptcies. Infrastructure acquired at 20-40 cents on the dollar. But physical assets remain productive, eventually absorbed into broader computing infrastructure as prices fall, enabling new use cases.
CHART 4: Banking System Exposure - The Firewall Evolution
What the Data Shows
Direct bank holdings of risk assets have declined from 90% (in 1929, when banks held stocks) to 85% (in 2008, when banks held mortgage-backed securities) to 5-15% (by 2025, through indirect AI exposure via lending). In the earlier article, I argued that this firewall prevents transmission.
Bloomberg/Fed Stress Test Evolution
Fed Governor Michael Barr's September 2025 remarks reveal tensions in the current methodology: he argues that setting bank capital levels should be separated from stress test results and customized more closely to each lender's condition.
Translation: current tests may be too formulaic, potentially missing institution-specific AI exposure through indirect channels that don’t appear in standard risk categories.
Bloomberg analysis notes that while stress tests were imposed after the 2008 financial crisis to strengthen banks against future economic shocks, they face criticism for lacking transparency in scenarios and assumptions. Banks argue that opacity prevents gaming the tests. Regulators acknowledge it also obscures how AI-related exposures accumulate outside traditional metrics.
The Testing Framework:
Current stress tests model:
10% unemployment (severe recession)
40% stock market decline
28% commercial real estate decline
38% home price decline
Results: Big U.S. banks would lose $541 billion but remain above minimum capital thresholds. This resilience is by design, but depends on scenarios accurately capturing risk channels.
New Vulnerability: Private Credit Conduit Risk
S&P Global's European bank stress test models trade-war scenarios in which corporate loans to AI-adjacent companies erode bank profits through private credit warehouse lending. Credit Agricole, BPCE, Commerzbank, and Rabobank face the hardest-hit scenarios, not from direct AI stock holdings, but through lending facilities extended to non-bank AI investors.
The Mechanism:
Bank extends warehouse line to private credit fund
Private credit fund lends to AI company
AI company faces revenue shortfall or bankruptcy
Private credit fund suffers losses
Fund draws on warehouse line to meet redemptions
Bank faces credit loss (not asset impairment)
This differs fundamentally from 2008 but creates indirect transmission pathways. The Economist’s financial reform glossary explains why leverage ratios now compare bank debts to capital rather than assets—precisely to capture these indirect channels.
Quantification:
Current estimates suggest U.S. banks have $400-600 billion in warehouse lending exposure to private credit, with perhaps 15-25% supporting AI-related investments. Maximum loss scenario (assuming 40% default rate and 50% recovery) implies $30-60 billion in potential credit losses, material for specific banks but far below the $541 billion stress test threshold.
The Critical Distinction: Direct vs. Indirect
This distinction explains why banking sector equity can decline during tech corrections without threatening payment systems or business lending.
What This Means for Your Business
Operational Implications:
Credit Line Stability: Banking isolation means your revolving credit facility won’t disappear because technology stocks correct. Your bank’s exposure is through its loan to you (based on your cash flow), not through its holdings of Nvidia stock. This differs fundamentally from 2008, when mortgage losses froze all credit channels.
Relationship Banking Advantage: Banks with minimal private credit exposure and a traditional lending focus become more attractive during volatility. Community and regional banks with less than 5% exposure to private credit may offer more stable relationships than money-center banks with 15-20% exposure.
Acquisition Financing: If a tech correction creates distressed M&A opportunities, traditional bank lending remains available (unlike in 2008, when M&A financing disappeared entirely). This creates an advantage for operators with clean balance sheets and strong banking relationships.
Monitoring Protocol
Quarterly: Bank earnings calls discussing private credit exposure (JPMorgan, Bank of America, Citigroup most relevant)
Semi-Annually: FDIC/OCC reports on commercial lending growth and credit quality
Annually: Fed stress test results with attention to scenario assumptions
Trigger for Action: If banks begin mentioning “elevated private credit losses” or if stress test scenarios are revised to include tech-specific shocks, indirect transmission risk is rising; diversify banking relationships.
The 2026-2028 Scenarios
Bull Case (40% probability): Banking isolation remains in place. Private credit losses remain below 2% of the lending base. No institution failures. Credit remains available throughout any equity market volatility. Validates post-2008 regulatory architecture.
Base Case (50% probability): Selected private credit fund failures create headline risk. Specific bank exposures trigger stock declines (not systemic risk). Some warehouse lines get pulled, creating a private credit liquidity squeeze—but commercial banking is unaffected. Contained stress.
Bear Case (10% probability): Widespread private credit defaults overwhelm loss estimates. Several mid-size banks face material losses requiring capital raises. Regulators intervene with emergency liquidity facilities. Brief credit tightening for tech sector, but traditional commercial lending remains available. No payment system risk.
CHART 5: AI Productivity Gains - The MIT Evidence
What the Data Shows
MIT/NBER research documents measurable productivity gains: 14% average improvement across all workers, 34% gains for novices (0-2 years experience), 12% customer sentiment improvement, 8% employee retention improvement. In the earlier article, I cited this as evidence that value creation exists. Here's the deeper analysis of why macroeconomic data lags microeconomic evidence—and what that lag actually means.
Why Macro Data Lags Micro Evidence
Brynjolfsson’s “AI and the Modern Productivity Paradox” explains the structural disconnect: general purpose technologies require “waves of complementary innovations” before full effects materialize at national scale. The framework reveals why productivity gains appear in firm-level studies (5,179 workers measured) but not aggregate statistics (Bureau of Labor Statistics).
The Implementation Lag Mechanism:
Technology arrives (GPT-4, Gemini, Claude available)
Early adopters experiment (10-15% of firms run pilots)
Learning period (12-24 months to develop best practices)
Organizational redesign (processes restructure around new capabilities)
Workforce retraining (complementary skills develop)
Broad adoption (majority of firms implement learnings)
Productivity shows in macro data (typically 5-10 years after step 1)
We're currently between steps 2 and 3 for generative AI. The macro productivity impact is expected in 2027-2030.
New NBER Research: “Compute Productivity” Measures
NBER’s latest work on “Firm Productivity and Learning in the Digital Economy” introduces novel metrics: “compute productivity” measuring how efficiently firms use computing resources, analogous to labor productivity.
Key Findings:
Early AI adopters show 15-25% compute productivity advantages over industry averages. But economy-wide compute productivity remains near zero because:
Only 28% of institutional investors rank AI as “most in-demand tech”
42% of policymakers prioritize AI but lack implementation authority
Just 31% of executives have moved beyond pilot projects
The Economist’s Open Innovation Barometer reveals adoption barriers:
28% cite “increased time and managerial costs”
27% cite “organizational complexity”
19% cite “lack of clear ROI metrics”
15% cite “workforce resistance”
The China Comparison: Are We Overspending?
The Economist’s analysis “China and America are racing to develop the best AI. But who is ahead in using it?” offers surprising insight. DeepSeek, a Chinese AI lab, “achieved cutting-edge capabilities at a fraction of the price of American rivals”—suggesting U.S. capital deployment may exceed necessary levels.
Alibaba chairman Joe Tsai warns that “the pace of data center construction in America may outstrip initial demand, creating temporary overcapacity.” Chinese competitors are achieving comparable results with 60-70% less capital intensity by focusing on algorithmic efficiency over raw compute power.
Implication: The $5.2 trillion U.S. projection may include a 20-30% inefficiency premium. This doesn't invalidate the investment (overcapacity eventually gets absorbed) but suggests patient capital opportunities as less-efficient players face write-downs.
The Adoption Curve Reality
Current adoption data shows:
10-15% of firms: Active implementation, measuring gains
25-30% of firms: Pilot projects, evaluating options
55-65% of firms: Awareness but no action, waiting for “proven ROI”
MIT research shows the 10-15% active group captures 14-34% productivity advantages NOW. The 55-65% waiting group will face catch-up costs (retraining, process redesign) plus foregone productivity during 2-3 year lag.
The Economist notes: "It took just a couple of decades for personal computing in offices to cross the 50% adoption threshold. The internet spread even faster." AI adoption curves suggest that 2028-2030 marks an inflection point when organizational complements catch up to technological capabilities, exactly matching the timeline for macro productivity data to reflect firm-level gains.
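One way to see why 2028-2030 falls out of the current adoption figures is to run a logistic S-curve through today's roughly 10-15% active-implementation share. The midpoint and steepness parameters below are assumptions calibrated loosely to that starting point, not forecasts from any of the cited sources.

```python
import math

def adoption_share(year: float, midpoint: float = 2028.5, steepness: float = 0.55) -> float:
    """Logistic S-curve for the share of firms actively implementing AI.
    midpoint and steepness are assumed parameters, chosen so that roughly
    12-13% of firms are active in 2025, in line with the survey figures above."""
    return 1.0 / (1.0 + math.exp(-steepness * (year - midpoint)))

for year in range(2025, 2031):
    print(year, f"{adoption_share(year):.0%}")
```

Under this calibration the curve crosses 50% between 2028 and 2029, consistent with the inflection window above.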
What This Means for Your Business
Operational Implications:
First-Mover Advantage Window: The 14-34% productivity gains are available NOW to early adopters. Competitors waiting for “proven ROI” or “industry standards” forfeit 2-3 years of compounding advantage. In mature industries with thin margins, 14% productivity translates directly to market share gains.
Talent Arbitrage: The 34% novice productivity gain is revolutionary—it means you can hire less experienced (cheaper) workers and use AI assistance to bring them to experienced-worker output levels. This fundamentally changes talent economics in knowledge work; a back-of-the-envelope check appears below.
Customer Experience Moat: The 12% customer sentiment improvement and 8% retention improvement compound over time. Early adopters build satisfaction advantages that create switching costs—making it harder for late adopters to catch up even when they eventually implement AI.
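The talent-arbitrage point can be sanity-checked with simple unit economics: output per salary dollar for an experienced hire versus an AI-assisted novice. The salaries, the tooling cost, and the unassisted novice output gap below are hypothetical placeholders; only the 34% uplift is the MIT figure cited above.

```python
# Hypothetical knowledge-work economics (salaries, tooling cost, and the 0.75
# unassisted novice output level are placeholders; the 34% uplift is the MIT figure).
experienced = {"salary": 140_000, "output": 1.00}
novice      = {"salary": 80_000,  "output": 0.75}
novice_ai   = {"salary": 80_000 + 3_000, "output": 0.75 * 1.34}  # ~experienced-level output

for label, p in [("experienced", experienced), ("novice", novice), ("novice + AI", novice_ai)]:
    print(f"{label:12s}: {p['output'] / p['salary'] * 1e5:.2f} output units per $100k of salary")
```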
Monitoring Protocol
Quarterly: Track your own implementation metrics (productivity per employee, customer sentiment scores, retention rates)
Semi-Annually: Benchmark against industry adoption surveys (Gartner, McKinsey publish sector-specific data)
Annually: Review macro productivity data (BLS releases) to identify when your sector-specific gains start appearing in aggregate
Trigger for Action: If your sector hits 30%+ adoption rate, late-mover disadvantage accelerates—urgency increases for implementation.
The 2026-2028 Scenarios
Bull Case (35% probability): Adoption accelerates faster than expected. 50% of firms actively implementing by 2027. Macro productivity data shows clear AI contribution by 2028. Revenue justifies infrastructure investment. Early movers gain sustainable advantages.
Base Case (50% probability): Adoption follows typical S-curve. 30-40% of firms implementing by 2028. Macro data shows modest productivity gains 2028-2029. Productivity advantages concentrate in early adopters but eventually diffuse. Infrastructure absorption takes 4-6 years.
Bear Case (15% probability): Adoption stalls below 25% through 2028. Organizational barriers prove harder than expected. Macro productivity remains elusive. Infrastructure sits underutilized. But firm-level gains persist for implementers, creating widening performance dispersion within industries.
CHART 6: Investment vs. Revenue - The $4 Trillion Paradox
What the Data Shows
Projected $5.2 trillion infrastructure investment by 2030 against roughly $1.2 trillion in AI revenue creates a 166:1 investment-to-revenue ratio. Healthy tech companies typically show less than 5:1 ratios (capital deployed to revenue generated). In the earlier article, I argued this gap reflects timing mismatch, not value destruction. Here’s why traditional metrics fail for infrastructure buildouts—and what ratios actually matter.
Why Traditional SaaS Metrics Fail
Wall Street Journal’s analysis of the “Rule of 40” (growth rate plus profit margin equals or exceeds 40%) explains why conventional wisdom misses the point: Rule of 40 gives leaders and investors an easy way to assess financial strength for software businesses with recurring revenue streams.
The key qualifier: recurring revenue streams. Rule of 40 works for Salesforce (subscription software) but fails for infrastructure plays. Data center operators, chip manufacturers, and power utilities have fundamentally different economics—high upfront capital, long amortization periods, lumpy revenue recognition.
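As a concrete illustration of why the metric breaks down, here is the Rule of 40 as a one-line screen, applied first to a hypothetical subscription business and then to a hypothetical infrastructure operator mid-buildout. All figures are invented for illustration.

```python
def rule_of_40(revenue_growth_pct: float, profit_margin_pct: float) -> bool:
    """Rule of 40 screen for recurring-revenue software: growth plus margin >= 40."""
    return revenue_growth_pct + profit_margin_pct >= 40

# A subscription SaaS business clears the bar comfortably...
print(rule_of_40(30, 15))   # True: 45 >= 40
# ...but a data-center operator early in a long amortization schedule fails it,
# even if the underlying asset is viable over its full life.
print(rule_of_40(80, -60))  # False: 20 < 40
```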
Comparative Capital Intensity Analysis:
AI infrastructure blends cloud, semiconductor, and power economics—producing naturally high investment-to-revenue ratios even under optimistic scenarios.
The Revenue Gap Reflects Timing, Not Flaw
NBER research on “Transformative AI and Firms” documents adoption following S-curves: slow initially, then rapid acceleration once organizational scaffolding exists. Current low revenue reflects early-stage positioning, not fundamental business model failure.
Historical Precedents with High Investment/Revenue Ratios
The pattern is consistent: general-purpose technologies experience high investment-to-revenue ratios during infrastructure buildout, prolonged absorption periods, and then undergo transformative utilization.
The Economist’s Adoption Timeline Research
The Economist notes it took just a couple of decades for personal computing in offices to cross the 50% adoption threshold. The internet spread even faster—less than 15 years from commercialization to majority adoption.
AI adoption curves based on current data (28% institutional investor prioritization, 31% executive implementation) suggest 2028-2030 as the inflection point: organizational complements (process redesign, training) mature, best practices diffuse across industries, cost curves decline enough to make adoption economically compelling, and revenue accelerates as the installed base monetizes.
This timeline aligns precisely with when the $5.2T infrastructure investment would need revenue validation—suggesting current ratios are expected, not aberrant.
What Makes AI Different: The Metcalfe’s Law Effect
Network value grows exponentially with nodes. For AI infrastructure:
Each additional data center increases training capacity (enabling larger models)
Each new chip generation improves price-performance (expanding viable use cases)
Each firm adoption creates training data (improving model quality)
Each complementary innovation unlocks new applications
Early revenue comes from narrow use cases (customer service chatbots, code completion). But value compounds as infrastructure enables unforeseen applications, just as fiber optic capacity laid for telecom enabled streaming video, cloud computing, and mobile internet that weren’t imagined when cables were laid.
What This Means for Your Business
Operational Implications:
Competitive Window: The 166:1 ratio means most competitors will wait for proven ROI and clear business cases before implementing. This creates a 2-4 year window where early adopters capture productivity advantages (14-34% from MIT research) without a competitive response. By the time revenue validates investment broadly, first-movers have built moats.
Infrastructure Arbitrage: Overcapacity (implied by a high investment-to-revenue ratio) means that compute costs will decline by 30-50% over the next 3-5 years as providers compete for utilization. Early implementation locks in today's prices; waiting captures lower costs but surrenders productivity gains during the interim period. A simple net-present-value comparison (sketched below) favors early action for most operators.
Acquisition Targeting: Companies with high AI infrastructure investment but weak revenue generation will face pressure. Patient capital can acquire capabilities at discounts as financial investors force write-downs. But operational value (trained models, datasets, process knowledge) persists, creating arbitrage for operators who can integrate assets productively.
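The net-present-value comparison mentioned under Infrastructure Arbitrage can be sketched with a toy model: adopt now at today's compute prices and capture the productivity gain immediately, or wait three years for cheaper compute and forgo the gain in the interim. Every input below (baseline profit, compute spend, discount rate, six-year horizon) is a hypothetical placeholder; the 14% gain and the 40% compute-cost decline come from the figures cited above (the latter as the midpoint of the 30-50% range).

```python
def npv(cashflows, rate=0.10):
    """Discount a list of annual cash flows (year 0 first) at the given rate."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

years = 6
# Hypothetical operator: $10m annual operating profit, $0.5m/yr AI compute spend today.
# Adopting now lifts profit 14% immediately; waiting three years cuts compute cost 40%
# but forgoes the productivity gain until then.
adopt_now = [10.0 * 1.14 - 0.5 for _ in range(years)]
wait_3yrs = [10.0 if t < 3 else 10.0 * 1.14 - 0.3 for t in range(years)]

print(f"Adopt now : NPV ~ ${npv(adopt_now):.1f}m")
print(f"Wait 3yrs : NPV ~ ${npv(wait_3yrs):.1f}m")
```

Under these placeholder inputs, adopting now wins by a modest margin; the gap widens if the productivity gain compounds into market share, and narrows if implementation costs are heavily front-loaded.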
The 2026-2028 Scenarios
Bull Case (30% probability): Revenue accelerates faster than expected. Enterprise adoption reaches 50% by 2028. AI revenue reaches $2.5-3.0 trillion annually by 2030. The investment-to-revenue ratio compresses to 15-20:1. Infrastructure investment is justified retroactively. Early investors capture gains.
Base Case (55% probability): Revenue grows steadily but below the hype. Enterprise adoption reaches 35-40% by 2028. AI revenue reaches $1.5-2.0 trillion by 2030. The investment-to-revenue ratio remains elevated at 40-60:1. Write-downs occur but infrastructure retains value. Patient capital opportunities are abundant. Eventual absorption over 5-8 years parallels the fiber-optic precedent.
Bear Case (15% probability): Revenue disappoints materially. Adoption stalls at 20-25%. AI revenue reaches only $800B-1.0T by 2030. The investment-to-revenue ratio exceeds 100:1. Significant overcapacity triggers bankruptcies. But infrastructure gets acquired at deep discounts and is eventually utilized as falling costs enable new applications. Societal gain despite investor losses.
Conclusion: From Metrics to Decisions
These six charts reveal structure beneath market volatility. Together, they construct a framework for assessing systemic risk independent of sentiment:
The Scoring Framework Applied:
What to Monitor Going Forward
The Operator’s Positioning:
Given current scoring (6/30 total risk), the data supports:
Aggressive business investment in productivity-enhancing AI capabilities (14-34% documented gains)
Selective equity exposure to capture upside while avoiding leverage that could force exits
Patient capital positioning to acquire stranded infrastructure if correction materializes
Monitoring protocols that track leading indicators (leverage, banking exposure, adoption rates) rather than lagging indicators (stock prices, sentiment)
The first article argued that structure determines outcomes. These charts show you how to measure the structure yourself and position accordingly.
Quick Reference: Chart Summary Table
Overall Assessment: The structure supports an aggressive business positioning. Investor corrections are likely; a systemic crisis is improbable.
Looking Ahead: Building the Operator’s Toolkit
These six charts provide a foundation for reading economic structure during volatility. We’ve examined how market concentration creates correlation risk without necessarily triggering crises, how leverage ratios predict amplification potential, why physical infrastructure differs from financial assets, how banking isolation prevents transmission, where productivity gains appear before macro data confirms them, and why investment-to-revenue ratios mislead during infrastructure buildouts. The framework (amplification, transmission, and policy constraint) offers a systematic approach to assessing systemic risk.
The goal remains constant: equip operators with frameworks that distinguish signal from noise, structure from sentiment, and actionable insight from headline anxiety. Charts tell stories when you know how to read them. The next step is translating those stories into operational decisions that compound advantage regardless of whether markets rise, fall, or remain volatile.
- john -
APPENDIX: Complete Research Bibliography
NBER Working Papers & Academic Research
Jordà, Òscar, Moritz Schularick, and Alan M. Taylor. “Leverage, Business Cycles, and Crises.” NBER Working Paper No. 17621, November 2011.
Brynjolfsson, Erik, Danielle Li, and Lindsey R. Raymond. “Generative AI at Work.” NBER Working Paper No. 31161, April 2023.
Brynjolfsson, Erik, Daniel Rock, and Chad Syverson. “Artificial Intelligence and the Modern Productivity Paradox: A Clash of Expectations and Statistics.” NBER Working Paper No. 24001, November 2017.
Coyle, Diane. “Making AI Count: The Next Measurement Frontier.” NBER Book Chapter, 2024.
Das, Sanjiv, Kris James Mitchener, and Angela Vossmeyer. “Systemic Risk and the Great Depression.” NBER Working Paper No. 25405, December 2018.
Rappoport, Peter, and Eugene N. White. “Was There a Bubble in the 1929 Stock Market?” NBER Working Paper No. 3612, February 1991.
Bernanke, Ben S., and Harold James. “The Gold Standard, Deflation, and Financial Crisis in the Great Depression: An International Comparison.” NBER Book Chapter in Financial Markets and Financial Crises, 1991.
Brunnermeier, Markus K., Simon Rother, and Isabel Schnabel. “Asset Price Bubbles and Systemic Risk.” NBER Working Paper No. 25775, April 2019.
Acemoglu, Daron, and Pascual Restrepo. “Firm Productivity and Learning in the Digital Economy.” NBER Working Paper No. 32938, September 2024.
Aghion, Philippe, Benjamin F. Jones, and Charles I. Jones. “Transformative AI and Firms.” NBER Book Chapter, 2024.
Eichengreen, Barry, and Peter Temin. “The Gold Standard and the Great Depression.” NBER Working Paper No. 6060, May 1997.
Xiong, Wei. “Bubbles, Crises, and Heterogeneous Beliefs.” NBER Working Paper No. 18905, March 2013.
Wall Street Journal
MFS Investment Management. “Why Active Management in 2025 and Beyond.” WSJ Paid Program, March 2025.
Clari. “Why the ‘Rule of 40’ Has Become So Critical.” WSJ Paid Program, September 2022.
Reardon, Marguerite. “Optical Delusion? Fiber Booms Again, Despite Bust.” Wall Street Journal, May 2012.
Zweig, Jason. “Knowing Your Own Risk Tolerance.” Wall Street Journal, November 2009.
Financial Times
Authers, John. “Concentration: the case for putting all your eggs in one basket.” Financial Times, September 2013.
Platt, Eric. “Big US banks would lose $541bn in doomsday scenario.” Financial Times, June 2023.
Multiple Authors. “Chasing Your Own Tail (Risk).” Financial Times Special Report, December 2019.
Armstrong, Robert, and Katie Martin. “A decade on from the financial crisis, what have we learnt?” Financial Times, August 2017.
Arnold, Martin. “Dodd-Frank is complex and overburdens the financial sector.” Financial Times, June 2017.
The Economist
“Where will the next crisis occur?” The Economist, May 2018.
“Financial markets are in trouble. Where will the cracks appear?” The Economist, October 2022.
“China and America are racing to develop the best AI. But who is ahead in using it?” The Economist, April 2025.
Economist Impact. “Driving development: The impact of ICT investments on the digital economy.” Economist Impact Report, 2023.
Economist Impact. “The Open Innovation Barometer.” Economist Impact Briefing Paper, 2023.
“Financial-reform glossary.” The Economist, May 2010.
“Your employer is (probably) unprepared for artificial intelligence.” The Economist, July 2023.
“Ten business trends for 2025, and forecasts for 15 industries.” The Economist: The World Ahead 2025, November 2024.
Bloomberg
Wingrove, Josh, and Katanga Johnson. “Fed’s Barr Offers Alternative Reforms to Big-Bank Stress Test.” Bloomberg, September 2025.
Levingston, Ivan, and Alicia Rittenhouse. “Fed Risks Making Stress Tests on Banking System Weaker.” Bloomberg Opinion, January 2025.
Frost, Stephen. “European Banks Face Profit Hit in S&P Trade War Stress Test.” Bloomberg, June 2025.
Massa, Aaron. “US Banking Rule Reform Is Too Important to Rush.” Bloomberg Opinion, March 2025.
Arnold, Martin. “S&P 500 Tech Stock Dominance Reaches Dot-Com Bubble Level.” Bloomberg, January 2025.
Tankersley, Jim. “On Wall Street, Big Tech’s Market Dominance Stirs Debate.” Bloomberg, September 2025.
New York Times
Smialek, Jeanna, and Joe Rennison. “The Rules of Investing Are Being Loosened. Could It Lead to 1929?” New York Times Magazine, October 2025.
Irwin, Neil. “How Long Can This Uncanny Stock Market Prosper?” New York Times, August 2025.
Fortune
Kosman, Josh. “AI spending added 0.5% to GDP growth.” Fortune, August 2025.
Thompson, Derek. “Since ChatGPT launched, job openings are down 30%.” Fortune, October 2025.
Harvard Business School
Wu, Andy. “Most Gen AI Players Remain ‘Far Away’ from Profiting.” Harvard Business School Working Knowledge, 2024.
Chen, Sophia, Andy Wu, and David Yao. “Value of AI Innovations.” Harvard Business School Research Paper 24-069, 2024.
Wharton School
Rauch, Ethan. “How to Be Smart About Artificial Intelligence.” Wharton Magazine, Fall/Winter 2021.
Benartzi, Shlomo, and Cary Frydman. “How AI-powered Collusion in Stock Trading Could Hurt Price Formation.” Knowledge@Wharton, 2024.
Stanford Graduate School of Business
Kuhnen, Camelia. “Research: How the Fear of Missing Out Makes Investors Risk Blind.” Stanford GSB Insights, 2023.
Admati, Anat, and John Cochrane. “Ten Years after the Financial Meltdown: What Have We Learned.” Stanford GSB Insights, September 2018.
Chicago Booth
Zingales, Luigi, and Guy Rolnik. “Does America Have an Antitrust Problem?” Chicago Booth Review, 2018.
Corporate and Industry Reports
AQR Capital Management. “Tail Risk and Asset Allocation.” AQR White Papers, 2023.
McKinsey & Company. “The State of AI in 2025.” McKinsey Global Institute, 2025.
Barclays Capital. “U.S. Data Center Power Demand: Infrastructure Challenges and Investment Opportunities.” Barclays Research, 2025.
International Energy Agency (IEA). “Electricity 2024: Analysis and Forecast to 2026 - Data Centres and AI.” IEA Report, July 2024.
CoreSite Realty Corporation. “Data Center Market Quarterly Reports.” Investor Relations, Q1-Q3 2025.
Equinix, Inc. “Global Interconnection Index and Market Analysis.” Annual Reports 2024-2025.
FINRA (Financial Industry Regulatory Authority). “Margin Statistics.” Monthly Statistical Reports, 2024-2025.
S&P Global Market Intelligence. “Private Credit Market Monitor.” Quarterly Reports, 2024-2025.
Federal Deposit Insurance Corporation (FDIC). “Quarterly Banking Profile.” Q1-Q3 2025.
Office of the Comptroller of the Currency (OCC). “Semiannual Risk Perspective.” Spring and Fall 2025.
Bureau of Labor Statistics. “Productivity and Costs by Industry.” Quarterly releases, 2024-2025.
Gartner, Inc. “AI and Analytics Survey.” Annual CIO Survey, 2025.
Forrester Research. “The State of Enterprise AI Adoption.” Forrester Wave Reports, 2024-2025.
Deutsche Bundesbank. “Climate stress test for the German banking sector.” Bundesbank Monthly Report, 2024.
Federal Reserve & Central Banks
Federal Open Market Committee (FOMC). “Meeting Transcripts and Minutes.” Federal Reserve, 2008-2025.
Board of Governors of the Federal Reserve System. “Comprehensive Capital Analysis and Review (CCAR) - Stress Test Results.” Annual Publications, 2023-2025.
European Central Bank (ECB). “Financial Stability Review.” Semi-Annual Reports, 2024-2025.
Bank of England. “Financial Stability Report.” Semi-Annual Publications, 2024-2025.
International Monetary Fund (IMF). “Global Financial Stability Report.” Semi-Annual Reports, April and October 2025.
This analysis is Part 2 of the Operator’s Investment Thinking series. For the complete structural argument, read “This Is Not 1929 or 2008: Why This Time Really Is Different.”
If you’d like to work together, I’ve carved out some time to work 1:1 each month with a few top-notch Founders and Operators. You can find the details here.
John Brewton documents the history and future of operating companies at Operating by John Brewton. He is a graduate of Harvard University and began his career as a Ph.D. student in economics at the University of Chicago. After selling his family's B2B industrial distribution company in 2021, he has been helping business owners, founders, and investors optimize their operations ever since. He is the founder of 6A East Partners, a research and advisory firm asking the question: What is the future of companies? He still cringes at his early LinkedIn posts and loves making content each and every day, despite the occasional protestations of his beloved wife, Fabiola.