2027 AI Market Scenarios — Where Does the $660B Go?

MJ · 6 min read

Three-scenario analysis (Bull/Base/Bear) of the AI market's $660B CapEx and survival strategies by segment

The $660B Question

The Top 5 hyperscalers’ 2026 CapEx outlook stands at $660B–$690B. This money is already being deployed. GPUs have been ordered, data centers are under construction, and power contracts have been signed. There is no turning back.

The question that remains open is the path to return on this investment. As we explored in Part 1, the ratio between infrastructure investment ($660B) and application revenue ($80B) currently sits at roughly 8:1. Whether this gap closes quickly or repeats the dot-com era pattern is the central question.

In this post, we analyze the AI market through 2027 across three scenarios and present the impact by segment and survival strategies for each.


Scenario Design Framework

Two key variables determine the scenarios:

  1. Speed of enterprise AI demand realization — 72% of CIOs have yet to see AI ROI (Gartner 2025). How quickly does this ratio improve?
  2. Inference cost decline vs. usage growth — Inference costs have fallen 280x over 24 months. How fast does usage growth offset continued cost declines?
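The second variable is a race between two exponentials: revenue per unit of inference falls while units consumed rise. A quick back-of-envelope sketch (the 280x figure is from the article; the break-even framing is my own illustration) shows how steep the required usage growth is:

```python
# If inference prices fall 280x over 24 months, how fast must usage grow
# per month so that revenue (price x usage) merely stays flat?
total_price_drop = 280   # 280x cost decline over the period (from the article)
months = 24

# Compound monthly price factor: price retains ~79% of its value each month
monthly_price_factor = (1 / total_price_drop) ** (1 / months)

# Usage growth needed per month just to offset the price decline
breakeven_usage_growth = 1 / monthly_price_factor - 1

print(f"price retains {monthly_price_factor:.1%} of its value each month")
print(f"usage must grow {breakeven_usage_growth:.1%}/month to hold revenue flat")
```

In other words, usage has to compound at roughly 26% per month just to keep platform revenue level, which is why "usage growth vs. cost decline" is treated as a scenario-defining variable.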

2027 AI Market Scenarios


Bull Scenario: AI Delivers on Its Promises (Probability 25%)

Prerequisites

  • AI agents reach a level of genuinely autonomous task execution (Level 3+ Autonomy)
  • Enterprise AI adoption rises above 60% by 2027
  • AI-generated code/content approaches human-level quality
  • Inference costs drop another 100x, unlocking new use cases

Market Size Projections

| Segment | 2026E | 2027E (Bull) | Growth Drivers |
| --- | --- | --- | --- |
| Infrastructure | $500B+ | $700B+ | Sustained excess demand, further CapEx expansion |
| Platform | $18B+ | $45B+ | Explosion in agent API demand |
| Application | $80B+ | $200B+ | Mass enterprise adoption |

What Happens in the Bull Case

Infrastructure: NVIDIA’s dominance weakens slightly (75% to 70%), but absolute revenue grows to $150B+. HBM demand exceeds supply by 2x. Custom ASICs capture 30%+ of the inference market.

Platform: AI agent frameworks (LangChain, CrewAI, etc.) become enterprise standards. The model API market consolidates into a three-way race among OpenAI/Anthropic/Google. Inference cost declines cause usage-based revenue to plateau, but companies pivot to agent orchestration billing models.

Apps: AI SaaS gross revenue retention (GRR) improves to 60%+. Deeper workflow integration raises switching costs. The distinction between “AI-native” and “AI-enhanced” companies becomes meaningless as all software embeds AI.

Historical analogy for the Bull scenario: The cloud transition of the 2010s. Early resistance — “security concerns,” “on-prem is cheaper” — gave way to AWS generating $90B+ annually. AI could follow a similar trajectory.


Base Scenario: Selective Success, Broad Correction (Probability 50%)

Prerequisites

  • AI delivers clear ROI in specific domains (coding, customer service, data analytics)
  • But general-purpose AI agents fall short of expectations
  • CapEx growth rate decelerates starting 2027 (dropping to 10–15% annual increases)
  • Some AI SaaS companies fail at scale, triggering restructuring

Market Size Projections

| Segment | 2026E | 2027E (Base) | Growth Drivers |
| --- | --- | --- | --- |
| Infrastructure | $500B+ | $550B+ | Slowing CapEx growth, inference efficiency gains |
| Platform | $18B+ | $30B+ | Specialized demand in coding and customer service |
| Application | $80B+ | $130B+ | Selective enterprise adoption |

What Happens in the Base Case

Infrastructure: Hyperscalers revise 2027 CapEx guidance downward. “We planned to spend $700B but are pulling back to $600B.” This announcement alone could send NVIDIA’s stock down 20–30%. Yet absolute investment remains historically massive, and GPU demand continues.

Platform: ARR growth at OpenAI and Anthropic decelerates — from 3–4x annual growth to 1.5–2x. Some AI startups (Jasper, Writesonic types) get acquired or shut down. The survivors are domain-specialized + workflow-integrated companies.

Apps: Sweeping restructuring across AI SaaS. As VC funding becomes selective, AI wrapper companies with low GRR fail en masse. The term “AI winter” resurfaces, but the reality is closer to an “AI autumn” — overheated expectations adjusting to realistic levels.

Base Scenario Timeline

Historical analogy for the Base scenario: The 2001–2003 dot-com correction. Amazon’s stock dropped 90% but survived to become a $1T+ company. Pets.com vanished. AI could undergo the same selection process.


Bear Scenario: The AI Bubble Bursts (Probability 25%)

Prerequisites

  • Enterprise AI ROI shows no improvement by 2027 (70%+ of companies still below breakeven)
  • Performance improvements in large AI models plateau (scaling laws hit their limits)
  • Geopolitical risks materialize (tighter US-China AI chip export controls, TSMC risk)
  • Surging energy costs drive up data center operating expenses

Market Size Projections

| Segment | 2026E | 2027E (Bear) | Growth Drivers |
| --- | --- | --- | --- |
| Infrastructure | $500B+ | $350B | Drastic CapEx cuts, GPU surplus |
| Platform | $18B+ | $15B | Intensifying price wars, company failures |
| Application | $80B+ | $60B | Mass churn, “AI winter” |

What Happens in the Bear Case

Infrastructure: GPU oversupply hits. Hyperscalers cancel or defer orders, causing NVIDIA’s revenue to decline quarter-over-quarter for the first time. Secondhand H100/H200 prices drop by 50% or more. Cloud GPU prices fall in tandem, which paradoxically makes AI usage significantly cheaper.

Platform: Massive cost cuts begin at major model companies — OpenAI, Anthropic, and others. With OpenAI’s annual inference costs at $10B+ and revenue growth slowing, cash burn becomes critical. Instead of an IPO, strategic acquisition (e.g., Microsoft’s full acquisition of OpenAI) emerges as a realistic scenario.

Apps: 60%+ of AI SaaS companies shut down. VC funding dries up in the AI sector, mirroring the 2022 crypto winter pattern. However, domain-specialized AI companies that have proven ROI (legal AI, coding AI, etc.) get acquired at undervalued prices and see their value recognized.

The Bear Paradox: Rebuild After Destruction

Even in the Bear scenario, AI technology itself does not disappear. The internet kept growing after the dot-com bubble burst in 2000. In fact, Amazon, Google, and Facebook were built on top of the infrastructure that excess investment had laid down.

The biggest beneficiaries in the Bear scenario are:

  • Startups leveraging cheap compute from the GPU surplus
  • Companies hiring AI talent shed by large corporations
  • Cash-rich companies that showed restraint during the overheated period (Apple, Berkshire, etc.)

Cross-Scenario Impact by Segment

| Segment | Bull (25%) | Base (50%) | Bear (25%) |
| --- | --- | --- | --- |
| Infrastructure | Sustained excess demand | Slowing CapEx growth | GPU oversupply |
| Platform | Agent API explosion | Selective growth | Price wars + restructuring |
| Apps | GRR 60%+, mass adoption | AI wrapper restructuring | 60%+ company shutdowns |
| NVIDIA | $150B+ revenue | Decelerating growth | First-ever revenue decline |
| OpenAI | $50B+ ARR | $35B ARR, slower growth | Strategic acquisition target |
| Korea (Infra) | HBM demand surge | HBM demand sustained | HBM price decline |
| Korea (Apps) | Niche AI export opportunities | Domestic-focused growth | Government subsidy dependence |
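Combining the three scenario tables with their stated probabilities gives a probability-weighted 2027 estimate per segment. A small sketch (figures are the 2027E numbers quoted earlier, in $B; the weighting itself is my own arithmetic, not a claim from the article):

```python
# Probability-weighted 2027 market size per segment, using the article's
# scenario probabilities and the 2027E figures from the three tables ($B).
probs = {"bull": 0.25, "base": 0.50, "bear": 0.25}

projections_2027 = {
    "infrastructure": {"bull": 700, "base": 550, "bear": 350},
    "platform":       {"bull": 45,  "base": 30,  "bear": 15},
    "application":    {"bull": 200, "base": 130, "bear": 60},
}

for segment, by_scenario in projections_2027.items():
    expected = sum(probs[s] * v for s, v in by_scenario.items())
    print(f"{segment}: probability-weighted 2027E ≈ ${expected:.1f}B")
```

Notably, the weighted infrastructure figure (~$537B) lands close to the Base case, while the application expectation matches the Base case exactly — a reminder that the Base scenario carries half the probability mass.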

Strategies That Hold Across All Scenarios

Regardless of which scenario materializes, the following strategies remain valid.

1. “Don’t Invest in AI Infrastructure — Build on Top of It”

Avoid bearing CapEx risk. Rent GPUs instead of buying them, use APIs instead of building models, leverage the cloud instead of constructing data centers. If the Bear scenario arrives, infrastructure costs plummet — and companies that avoided CapEx benefit the most.
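As a back-of-envelope illustration of the rent-vs-buy point (all prices below are assumptions for illustration, not figures from the article):

```python
# Rent-vs-buy sketch: buying a GPU is a sunk cost; renting scales with use.
# Both prices are illustrative assumptions.
gpu_purchase = 30_000   # assumed one-time cost per GPU ($)
cloud_rate = 2.50       # assumed cloud price per GPU-hour ($)

# Hours of rental that equal the purchase price
breakeven_hours = gpu_purchase / cloud_rate

# Running 24/7, how many months until renting costs more than buying?
breakeven_months = breakeven_hours / (24 * 30)

print(f"break-even at {breakeven_hours:,.0f} GPU-hours "
      f"(~{breakeven_months:.1f} months at full utilization)")
```

At anything below full utilization the break-even horizon stretches further out, and in the Bear scenario falling cloud GPU prices push it out again — the renter captures the price decline, the buyer is stuck with the sunk cost.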

2. “Prove GRR 70% First”

The critical KPI for AI products is not ARR growth rate but GRR. As analyzed in Part 3, achieving GRR 70%+ requires:

  • Deep workflow integration
  • Proprietary data/models
  • Value-based pricing at $250+/month

You need at least one of these three.
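The retention metric itself is simple enough to state in code. A minimal sketch of gross revenue retention (the cohort figures are invented for illustration):

```python
# Gross revenue retention (GRR): what share of a cohort's starting ARR
# survives churn and downgrades. Expansion revenue is excluded by
# definition, so GRR caps at 100%.
def grr(starting_arr: float, churned: float, downgraded: float) -> float:
    retained = starting_arr - churned - downgraded
    return retained / starting_arr

# Illustrative cohort: starts the year at $1.0M ARR, loses $250k to
# churn and $50k to downgrades -> GRR of 70%, the article's threshold.
print(f"GRR = {grr(1_000_000, 250_000, 50_000):.0%}")
```

This is why GRR is the harsher KPI than ARR growth: new sales can mask churn in the ARR number, but nothing masks it in GRR.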

3. “Maintain 18 Months of Runway”

In the Base/Bear scenarios, VC funding may become selective or dry up entirely. Managing burn rate and securing at least 18 months of runway is the baseline condition for survival.

| Runway | Status | Response |
| --- | --- | --- |
| Under 6 months | Crisis | Immediate cost cuts, bridge funding |
| 6–12 months | Caution | Moderate growth pace, prioritize profitability |
| 12–18 months | Managed | Selective investment + efficiency gains |
| 18+ months | Stable | Position to capture scenario-specific opportunities |
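The runway bands above reduce to one division: cash on hand over net monthly burn. A quick sketch with invented figures:

```python
# Runway (months) = cash on hand / net monthly burn.
def runway_months(cash: float, monthly_burn: float) -> float:
    return cash / monthly_burn

# Status bands matching the table above
def status(months: float) -> str:
    if months < 6:
        return "Crisis"
    if months < 12:
        return "Caution"
    if months < 18:
        return "Managed"
    return "Stable"

# Illustrative company: $4.5M cash, $300k/month net burn -> 15 months
cash, burn = 4_500_000, 300_000
m = runway_months(cash, burn)
print(f"{m:.0f} months of runway: {status(m)}")
```

Note that burn here is net burn (expenses minus revenue), so a company can move itself up a band either by cutting costs or by growing revenue — the "prioritize profitability" response in the Caution row.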

4. “Dig a Domain Moat”

General-purpose AI becomes a red ocean of price competition. The defensible moat lies in the combination of data + workflow + regulatory expertise within specific domains (legal, healthcare, manufacturing, finance). This holds true in every scenario.

Strategies That Work in Any Scenario


Closing: Understanding the Economics of Each Segment

Throughout this series, we examined the AI market not as a single number but segment by segment. Under the same label of “AI market,” an infrastructure segment with 88% margins coexists alongside an application segment with 40% GRR. The landscape looks entirely different depending on which segment you occupy and what role you play.

What is certain is that the $660B in investment is already being deployed. What is uncertain is the path to returns. Whichever scenario plays out, the strategy of “keeping CapEx light, proving retention first, and digging moats in your domain” appears sound.


Sources

  1. Hyperscaler CapEx Outlook: IEEE Communications Society (2025.12), CNBC (2026.02), Futurum Group (2026.02), Goldman Sachs (2025.09)
  2. CIO AI ROI Survey: Gartner Newsroom (2025.05 survey, n=506)
  3. LLM Inference Cost Trends: Stanford AI Index 2025, a16z “LLMflation”, Epoch AI
  4. AI SaaS GRR/NRR: ChartMogul “The AI Churn Wave” (2025.09)
  5. OpenAI/Anthropic ARR: PYMNTS, SaaStr, Yahoo Finance (2026.02–03)
  6. NVIDIA Financials: NVIDIA FY2025 10-K (SEC), Q4 Earnings Call Transcript
  7. Dot-com Bubble Comparison: Scott Galloway “Prof G”, Morgan Stanley (2025)
  8. AI VC Investment Trends: OECD (2026.02), Crunchbase, BestBrokers
  9. DeepSeek MoE Efficiency: DeepSeek-V3 Technical Report (arXiv), Epoch AI
  10. Korea HBM Market: TrendForce (2025), SK hynix FY2025, Samsung FY2025