Memory Semiconductor Cycle: Micron (MU), Samsung, SK Hynix
Date: March 27, 2026
Focus: HBM supply positioning for Nvidia/AMD next-gen chips, cyclical framework, and lessons from prior memory cycles
Table of Contents
- Executive Summary
- The Business: What You're Actually Buying
- Financial Comparison: The P&L
- HBM: The Battle for Nvidia and AI Chip Supply
- Future Nvidia Roadmap and Memory Supplier Positioning
- Beyond Nvidia: AMD, Google, OpenAI, and Custom ASICs
- The Cyclical Framework: How Memory Cycles Work
- Historical Cycle Guide: Investing Through Boom and Bust
- Where Are We in This Cycle?
- Valuation: What You're Paying
- Bull and Bear Cases by Company
- How to Invest in This Cycle: A Framework
1. Executive Summary
The memory semiconductor industry is in the strongest upcycle in its history, driven by AI-related demand for High Bandwidth Memory (HBM) that has transformed what was historically a brutally cyclical commodity business into something closer to a design-win-driven, structurally advantaged oligopoly. Three companies — SK Hynix, Samsung Electronics, and Micron Technology — control ~95% of global DRAM and are the only suppliers of HBM for AI accelerators. The HBM market alone is projected to grow from ~$35B in 2025 to ~$55B in 2026 and potentially $100B by 2028.
The key question for investors: who wins the HBM allocation race for next-generation Nvidia, AMD, and custom AI chips — and how much of the cyclical playbook still applies when the demand driver is structurally different from anything the memory industry has ever seen?
My read on the three companies:
- SK Hynix is the clear HBM leader with ~53-62% market share and ~70% of Nvidia's HBM4 orders for the Vera Rubin platform. It is the "toll road" for AI compute and commands premium margins. The risk is concentration: Nvidia is the overwhelming customer.
- Samsung had a humiliating 18-month HBM3E qualification failure with Nvidia but has recovered aggressively. It is now mass-producing HBM4, has secured ~30% of Nvidia's HBM4 allocation, supplies 60%+ of Google's TPU HBM, and just won an exclusive 800M Gb HBM4 deal with OpenAI for its Titan chip. Samsung's integrated model (memory + foundry + packaging) is unique and underappreciated. The foundry division is still loss-making, though losses are narrowing, which depresses consolidated multiples.
- Micron is the smallest HBM player (~18% share) but arguably the highest-quality pure-play memory stock. It just posted 69% operating margins in FQ2 2026, has the most aggressive US manufacturing expansion (backed by $6.4B in CHIPS Act funding), and is the only company reporting financials that cleanly isolate the memory business. Its HBM4 is in volume production for Nvidia Vera Rubin.
The cyclical warning: memory stocks have historically peaked ~2 quarters before earnings peak, and the best time to buy has always been when P/E ratios are negative or undefined (i.e., when companies are reporting losses). At current valuations — Micron at 7.9x P/B, SK Hynix at 6.0x P/B — the stocks are priced for a continued supercycle. The historical average trough-to-peak gain for Micron is ~600%; the current cycle has already delivered ~873% from the December 2022 trough. The question is whether HBM's structural demand extends this cycle beyond historical patterns, or whether we're approaching the point where cycle-aware investors should be thinking about position sizing and exit timing.
2. The Business: What You're Actually Buying
The DRAM Oligopoly
The DRAM industry consolidated from ~20 meaningful producers in the 1990s to just three by 2013:
| Event | Year | Impact |
|---|---|---|
| Qimonda bankruptcy (spun from Infineon) | 2009 | German DRAM exits |
| SK Telecom acquires Hynix | 2012 | Forms SK Hynix |
| Micron acquires bankrupt Elpida (Japan) | 2013 | Japan's last DRAM maker exits |
| Result: 3-player oligopoly | 2013-present | Samsung, SK Hynix, Micron control ~95% of DRAM |
This is the most important structural fact in the industry. Before 2013, memory was a value-destroying business — companies chronically overinvested, prices crashed, and firms went bankrupt in every downcycle. Since the oligopoly formed, the Big 3 have earned through-cycle returns above the cost of capital for the first time. There is no credible fourth entrant: a new leading-edge DRAM fab costs $15-20B and takes 3-5 years to build. China's CXMT is the closest thing to a new competitor and remains limited to commodity DDR4/DDR5, years behind on HBM.
DRAM Market Share (Q3-Q4 2025, by Revenue)
| Company | Q3 2025 | Q4 2025 | Trend |
|---|---|---|---|
| Samsung | 32.6% | ~35-38% (reclaimed #1) | Recovering HBM ramp |
| SK Hynix | 33.2% | ~32-34% | HBM margin leadership |
| Micron | 25.7% | ~23-25% | Highest margin, smallest share |
| Others (Nanya, etc.) | ~8.5% | ~5-7% | Shrinking |
NAND Market Share (Q3 2025)
| Company | Share |
|---|---|
| Samsung | 32.3% (#1) |
| SK Group (Hynix + Solidigm) | 19.3-21.1% |
| Kioxia | 15.3% |
| Micron | 13.3% |
| SanDisk (spun from WD) | 12.4% |
What I like here: NAND is a five-player market (less consolidated than DRAM) but the thesis for these three names is overwhelmingly about DRAM and HBM. NAND is a supporting actor — contributing revenue and benefiting from the same supply discipline — but it's not the reason to own these stocks in 2026.
What Is HBM and Why Does It Matter?
High Bandwidth Memory (HBM) is a specialized type of DRAM that vertically stacks multiple memory dies using Through-Silicon Vias (TSVs), bonded to a logic base die. It offers dramatically higher bandwidth and capacity in a smaller footprint than standard DRAM, making it essential for AI training and inference accelerators.
Key characteristics:
- HBM3E delivers ~1.2 TB/s bandwidth per stack (vs. ~50-60 GB/s for DDR5)
- HBM4 pushes this to ~1.6+ TB/s per stack, with 13 TB/s aggregate on Nvidia Rubin
- HBM requires ~3x the wafer output per equivalent bit vs. conventional DRAM — it absorbs enormous fab capacity
- HBM commands ~4x the price per bit vs. standard DDR5
- AI is expected to consume ~20% of global DRAM wafer capacity in 2026, up from near-zero in 2022
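These per-stack specs imply the stack configuration on an accelerator. A quick back-of-envelope check in Python — note that the 8-stacks-per-GPU figure is an inference from the numbers above, not something stated in this piece:

```python
# Sanity-check the HBM4 figures quoted above.
# Assumption (not from the text): Vera Rubin pairs 8 HBM4 stacks per GPU.
stacks_per_gpu = 8
bw_per_stack_tbs = 1.6   # HBM4 bandwidth per stack, TB/s
cap_per_stack_gb = 36    # 12-high HBM4 stack capacity, GB

aggregate_bw = stacks_per_gpu * bw_per_stack_tbs    # ~12.8 TB/s, in line with the ~13 TB/s quoted
total_capacity = stacks_per_gpu * cap_per_stack_gb  # 288 GB, matching the per-GPU capacity

print(f"~{aggregate_bw:.1f} TB/s aggregate, {total_capacity} GB per GPU")
```

The arithmetic lines up with Nvidia's published Rubin figures (13 TB/s, 288 GB), which is why 36GB 12-high stacks are the HBM4 sweet spot all three suppliers are ramping.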
HBM4 is structurally different from prior generations: it includes a customer-specific logic base die, making it more like a design win and less like a fungible commodity. This shifts the competitive dynamics toward deeper customer relationships and co-design capability — which favors SK Hynix (Nvidia) and Samsung (integrated foundry for base die production).
Company Profiles
Micron Technology (NASDAQ: MU)
- Pure-play memory: ~79% DRAM, ~21% NAND (FQ2 2026)
- Headquarters: Boise, Idaho
- Fabs: Idaho, Virginia, Singapore, Japan, Taiwan
- CHIPS Act: $6.4B federal funding; $200B US investment commitment over 20+ years
- Key advantage: cleanest financial reporting; highest current margins; massive US expansion
- Key risk: smallest HBM allocation among the Big 3
SK Hynix (KRX: 000660)
- DRAM + NAND, with HBM as the dominant growth driver
- Headquarters: Icheon, South Korea
- Key facilities: Icheon, Cheongju (M15X for HBM4), new Indiana advanced packaging fab
- Parent: SK Group (SK Telecom)
- Key advantage: deepest Nvidia relationship; first to mass-produce every HBM generation; ~70% of Nvidia's HBM4 orders
- Key risk: Nvidia customer concentration; Korea corporate governance discount
Samsung Electronics (KRX: 005930)
- Memory is part of the Device Solutions (DS) division, alongside a loss-making foundry/System LSI business
- Headquarters: Suwon, South Korea
- Key facilities: Pyeongtaek (P3, P4, P5 under construction), Austin, Taylor TX (foundry)
- Key advantage: only company offering memory + foundry + advanced packaging in-house; diversified customer base (Google, AMD, OpenAI); broadest product portfolio
- Key risk: foundry losses dilute memory profitability; Samsung conglomerate discount; HBM execution stumbles cost 18 months of Nvidia share
3. Financial Comparison: The P&L
Micron (FQ2 2026 = Dec-Feb Quarter, Reported March 18, 2026)
| Metric | FQ2 2025 | FQ1 2026 | FQ2 2026 | YoY Change |
|---|---|---|---|---|
| Revenue | $8.05B | $13.64B | $23.86B | +196% |
| DRAM Revenue | ~$6.1B | ~$10.8B | $18.8B | +207% |
| NAND Revenue | ~$1.9B | ~$2.8B | $5.0B | +169% |
| Gross Margin | 36.8% | ~60% | 74.9% | +3,810bps |
| Operating Income | — | — | $16.5B | — |
| Operating Margin | ~25% | ~47% | 69% | +4,400bps |
| GAAP EPS | $1.56 | $4.78 | $12.07 | +674% |
| Non-GAAP EPS | $1.79 | $4.78 | $12.20 | +582% |
| Free Cash Flow | — | $3.9B | $6.9B | +702% |
FQ3 2026 Guidance: Revenue $33.5B (+/-$750M), gross margin ~81%, EPS $19.15 (+/-$0.40). These are staggering numbers — $33.5B in a single quarter would annualize to $134B, roughly 3.6x FY2025's full-year revenue of $37.4B.
Revenue by Business Unit (FQ2 2026):
| Segment | Revenue | % of Total | QoQ Growth |
|---|---|---|---|
| Cloud Memory (CMBU) | $7.7B | 32% | +47% |
| Core Data Center (CDBU) | $5.7B | 24% | +139% |
| Mobile/Client (MCBU) | $7.7B | 32% | +81% |
| Auto/Embedded (AEBU) | $2.7B | 11% | +57% |
What I like: 69% operating margins. These are margins you'd expect from a software monopoly, not a memory chip company. HBM mix shift is driving this — HBM commands 4x the ASP of standard DDR5, and Micron's HBM capacity is fully sold out through CY2026 and CY2027.
What concerns me: capex is rising fast. FY2026 capex is now guided above $25B, with FY2027 "meaningfully higher" including $10B+ in construction spending. The new Idaho and New York fabs are multi-year projects (Idaho production starts 2H 2027, New York around 2030). In prior cycles, rising capex has been the leading indicator of eventual oversupply.
SK Hynix (Q4 2025, Reported January 28, 2026)
| Metric | Q4 2024 | Q3 2025 | Q4 2025 | YoY Change |
|---|---|---|---|---|
| Revenue | KRW 19.77T | KRW 24.52T | KRW 32.83T | +66% |
| Operating Profit | KRW 8.08T | KRW 11.41T | KRW 19.17T | +137% |
| Operating Margin | 40.9% | 46.5% | 58.4% | +1,750bps |
| Net Profit | KRW 8.01T | KRW 10.08T | KRW 15.25T | +90% |
Full Year 2025: Revenue KRW 97.15T (~$68B), operating profit KRW 47.21T (~$33B), 49% operating margin. Net profit KRW 42.95T (~$30B). Annual profit exceeded SK Hynix's entire 2023 revenue.
HBM contribution: HBM revenue more than doubled YoY in 2025. All DRAM, NAND, and HBM production through 2026 is sold out. HBM3E remains ~2/3 of HBM shipments in 2026, with HBM4 ramping from Q2.
Samsung (DS Division, Q4 2025)
| Metric | Q4 2024 | Q3 2025 | Q4 2025 | YoY Change |
|---|---|---|---|---|
| DS Division Revenue | KRW 28.1T | KRW 40.0T | KRW 44.0T | +57% |
| DS Division Operating Profit | KRW 2.9T | KRW 13.5T | KRW 16.4T | +466% |
| Memory Operating Profit (est.) | ~KRW 5T | ~KRW 14T | ~KRW 17T | +240% |
| Foundry/System LSI Loss | ~KRW 2T+ | ~KRW 1T | ~KRW 800B | Narrowing |
Full Year 2025: DS Division revenue KRW 130.1T, operating profit KRW 24.9T (+65% YoY). Total Samsung FY2025 revenue KRW 333.6T, operating profit KRW 43.6T.
Critical context: Samsung's DS division includes the chronically loss-making foundry/System LSI business. The memory-only operating margin likely exceeded 50% in Q4 2025, but it's diluted by foundry losses that ran ~KRW 800B in Q4 (improved from KRW 2T+ in H1 2025). When comparing Samsung to SK Hynix or Micron, you have to mentally separate the memory business from the foundry anchor. This is the single biggest reason Samsung trades at a "discount" — the market is right to apply a conglomerate discount, but may be overweighting the foundry drag relative to the HBM recovery.
Side-by-Side Margin Comparison (Latest Quarter)
| Metric | Micron (FQ2 2026) | SK Hynix (Q4 2025) | Samsung DS (Q4 2025) |
|---|---|---|---|
| Revenue | $23.86B | $22.93B | $30.77B |
| Operating Margin | 69% | 58% | ~37% (incl. foundry drag) |
| Memory-Only Op Margin (est.) | 69% | 58% | >50% |
| Gross Margin | 74.9% | ~63-67% | ~63-67% (memory only) |
My read: Micron's margin leadership is partly mix (the highest HBM ASP uplift as a share of revenue, since it is the smallest player) and partly the benefit of reporting memory-only financials. Samsung's consolidated margins make it look worse than it is. SK Hynix sits in the middle — outstanding margins, but the spread to Micron reflects that SK Hynix is the volume leader and likely sells a larger share of its output at lower-margin conventional DRAM price points.
4. HBM: The Battle for Nvidia and AI Chip Supply
HBM Market Share Evolution
| Supplier | Q2 2025 (by shipments) | Q3 2025 (by revenue) | 2026 HBM4 Forecast (Counterpoint) |
|---|---|---|---|
| SK Hynix | 62% | 53-57% | 54% |
| Samsung | 17% | 35% | 28% |
| Micron | 21% | ~11-25% | 18% |
Samsung's surge from 17% to 35% in Q3 2025 reflects the initial impact of its HBM3E qualification with Nvidia (September 2025) and ramp of AMD MI300X/MI350 supply. The 2026 numbers reflect HBM4 specifically, where SK Hynix's first-mover advantage is most pronounced.
HBM Supply by Nvidia Chip Generation
| Nvidia Chip | Memory | SK Hynix | Samsung | Micron |
|---|---|---|---|---|
| H100 (2023) | HBM3, 80 GB | Sole supplier | — | — |
| H200 (2024) | HBM3E, 141 GB | ~70-80% | Late entrant (qual'd Sept 2025) | ~20-30% |
| B200/GB200 (2024-25) | HBM3E, 192 GB | >60% | Small allocation | Qualified |
| B300/GB300 (Jan 2026) | HBM3E, 288 GB | Primary | Small allocation (qual'd Sept 2025) | Qualified, 12-high |
| Vera Rubin (2H 2026) | HBM4, 288 GB | ~70% | ~30% | In volume production |
| Rubin Ultra (2027) | HBM4E, 1 TB | TBD | TBD (showed HBM4E at GTC 2026) | TBD |
| Feynman (2028) | HBM4E or HBM5 | TBD | TBD | TBD |
Samsung's HBM3E Qualification Debacle — and Recovery
This is one of the most consequential competitive dynamics in semiconductors over the past two years. Samsung — the world's largest memory company — failed Nvidia's HBM3E qualification repeatedly for ~18 months:
- Root cause: thermal performance / heat dissipation issues stemming from Samsung's insistence on TC-NCF (thermocompression non-conductive film) bonding technology
- Multiple failures: failed validation throughout 2024 and again in mid-2025 (TrendForce reported "stumbles again" in June 2025)
- Fix: Vice Chairman Jun Young-hyun ordered a full redesign of the DRAM core for HBM3E, addressing thermal issues via a redesign of Samsung's 1a-class DRAM
- Resolution: Samsung finally passed Nvidia's 12-high HBM3E qualification in September 2025 — approximately 18 months after development completion
- Cost: Samsung's HBM market share cratered to 17% in Q2 2025 while SK Hynix dominated at 62%
The recovery has been aggressive:
- Samsung cut HBM3E prices by ~30% to gain share (unit price from ~$300 to ~$200 per stack)
- HBM4 samples passed Nvidia's quality testing in late 2025
- HBM4 mass production began February 2026
- Samsung secured ~30% of Nvidia's HBM4 orders for Vera Rubin
- HBM sales expected to more than triple in 2026 vs. 2025 (from ~$9B to ~$27B+)
- HBM4 yield rate: ~60% (vs. SK Hynix's HBM3E yields at ~80%)
My read: Samsung's HBM debacle was a genuine execution failure, not a fundamental technology gap. The redesign fixed it, and Samsung is now on a credible trajectory to ~28-30% HBM market share by end of 2026. But the episode matters: it demonstrated that in HBM, first-mover advantage is enormous because qualification cycles are long and customers can't easily switch suppliers mid-generation. SK Hynix's 18-month head start on HBM3E translated directly into market share dominance and pricing power. Samsung is playing catch-up, and it's using price cuts to buy share — which compresses margins for everyone.
SK Hynix's Nvidia Moat
SK Hynix's position with Nvidia is the strongest customer-supplier relationship in memory semiconductors:
- Sole supplier for HBM3 (H100 — the chip that kicked off the AI boom)
- First to mass-produce HBM3E (12-high 36GB in September 2024, 6 months ahead of competitors)
- ~70% of Nvidia's HBM4 orders for Vera Rubin (UBS, confirmed by multiple reports January 2026)
- M15X fab started commercial production four months ahead of schedule in February 2026
- SK Group Chairman and Jensen Huang maintain a close personal relationship
- All HBM supply through 2026 allocated largely to Nvidia
- HBM3E yields: ~80% (publicly disclosed), helping cut mass production times by 50%
Counterpoint: This concentration is both a strength and a risk. If Nvidia ever diversified supply more aggressively (as they've signaled with Samsung's HBM4 qualification), SK Hynix's premium positioning could erode. And SK Hynix is essentially a single-customer story for its highest-margin product.
Micron's HBM Position
Micron is the third player but shouldn't be dismissed:
- Qualified HBM3E for Nvidia H200 and Blackwell platforms
- HBM4 36GB 12-high entered volume production in Q1 CY2026 for Vera Rubin
- HBM4 48GB 16-high already sampled to customers
- HBM capacity sold out through CY2026 and CY2027
- New HBM advanced packaging facility in Singapore (operational CY2027)
- HBM3E yields: ~75% for 8-layer, ~70% for 12-layer
- Micron pulled forward its HBM TAM estimate to $100B by CY2028 (40%+ CAGR)
5. Future Nvidia Roadmap and Memory Supplier Positioning
Nvidia GPU Roadmap
| Platform | Launch | Memory Type | Bandwidth | Per-GPU Capacity | Key Details |
|---|---|---|---|---|---|
| Blackwell Ultra (B300/GB300) | Jan 2026 | HBM3E 12-high | 8 TB/s | 288 GB | Shipping now |
| Vera Rubin | 2H 2026 | HBM4 12-high | 13 TB/s | 288 GB | SK Hynix ~70%, Samsung ~30%, Micron in vol. production |
| Rubin Ultra NVL576 | 2027 | HBM4E | TBD | 1 TB per GPU | 4 reticle-sized chips, 16 HBM sites per GPU |
| Feynman | 2028 | HBM4E or HBM5 | TBD | TBD | 3D die stacking; new Rosa CPU platform |
Who Wins Vera Rubin? (The Most Important Near-Term Question)
Vera Rubin is the next major Nvidia GPU platform, launching 2H 2026 with HBM4. This is where the supply chain battle is most visible:
SK Hynix: ~70% allocation
- First to qualify HBM4 with Nvidia
- M15X fab operational and ramping, initially ~10,000 wafers/month scaling severalfold by year-end
- 1b DRAM output for HBM4 began February 2026
- Board approved additional KRW 21.6T (~$15B) investment through 2030
Samsung: ~30% allocation
- HBM4 samples passed Nvidia quality testing late 2025
- Mass production started February 2026
- Targeting "early delivery" to close gap with SK Hynix
- HBM4 uses 1c DRAM technology (current yield ~60% — well below SK Hynix's 80% on HBM3E)
- Pyeongtaek: >50% of foundry capacity allocated to HBM4 base die production
Micron: In volume production
- Began high-volume HBM4 36GB 12-high production in Q1 2026
- Expects to sell out entire HBM4 capacity for CY2026
- Specific allocation percentage for Vera Rubin not publicly disclosed, likely 10-15%
Rubin Ultra and Beyond: Where Samsung Could Close the Gap
Rubin Ultra (2027) is the inflection point where Samsung's integrated model could become a genuine advantage. Each GPU uses 1 TB of HBM4E memory across 16 HBM sites — an enormous amount of memory per chip. The base die in HBM4/HBM4E includes customer-specific logic, and Samsung is the only company that can manufacture both the memory dies and the logic base die in-house.
Samsung revealed HBM4E at GTC 2026, breaking the 16 Gbps barrier. More than 20 customized HBM solutions are in development. Samsung is also co-developing bufferless HBM4 with TSMC, which could broaden its customer base further.
My view: The shift from commodity memory to custom logic-integrated memory (HBM4 and beyond) should structurally favor Samsung over time, because no one else can offer full vertical integration. But "over time" may mean 2027-2028, not 2026. For 2026, SK Hynix remains dominant with Nvidia, and the allocation split (~70/30 SK Hynix/Samsung) is already locked in.
6. Beyond Nvidia: AMD, Google, OpenAI, and Custom ASICs
The HBM competitive landscape changes significantly when you look beyond Nvidia. Samsung's diversified customer base is a strategic advantage that the market may be underpricing.
AMD
| AMD Chip | Memory | Primary HBM Supplier(s) |
|---|---|---|
| MI300X | HBM3, 192 GB | Samsung (primary) |
| MI325X | HBM3E, 288 GB | Samsung, Micron |
| MI350 (MI350X/MI355X) | HBM3E 12-high | Samsung + Micron (dual suppliers, TrendForce confirmed) |
| MI400/MI450 (2026-27) | HBM4 | Samsung undergoing qualification |
Key insight: Samsung is the dominant HBM supplier for AMD, which is the inverse of the Nvidia dynamic. This diversifies Samsung's customer base and reduces its dependence on catching up with SK Hynix at Nvidia.
Google TPUs (via Broadcom)
- Samsung supplies >60% of Google's HBM3E shipments through Broadcom (2025)
- Samsung HBM4 reportedly "beat expectations" in Broadcom testing
- Samsung expected to supply >2x the volume to Google in 2026 vs. 2025
- For 2026 TPUs with HBM4: Samsung set to lead supply
OpenAI "Titan" Custom Chip (Announced March 2026)
- Samsung secured an exclusive deal to supply 800 million Gb of HBM4 to OpenAI for its "Titan" chip
- Titan: designed by Broadcom, manufactured by TSMC, launching late 2026
- Makes OpenAI Samsung's third-largest HBM4 customer behind Nvidia and AMD
- This is a significant win that demonstrates Samsung's diversification strategy is working
Customer Concentration Summary
| Supplier | Primary Customers | Concentration Risk |
|---|---|---|
| SK Hynix | Nvidia (dominant), some AMD/others | High — Nvidia is overwhelmingly the largest customer |
| Samsung | Google, AMD, OpenAI, Nvidia (~30%) | Low — most diversified customer base |
| Micron | Nvidia, AMD, broad conventional DRAM | Moderate — HBM is a smaller % of revenue |
My read: Samsung's customer diversification is the most underappreciated competitive advantage in this space. While the market obsesses over who wins the most Nvidia allocation (SK Hynix), Samsung is quietly building a dominant position with Google, AMD, and now OpenAI. If custom ASICs (Broadcom, Marvell) grow their share of AI compute relative to Nvidia GPUs — a plausible scenario given the hyperscalers' desire to reduce Nvidia dependence — Samsung's HBM share could grow faster than consensus expects.
7. The Cyclical Framework: How Memory Cycles Work
Understanding memory cycles is essential for investing in this sector. Memory is unlike most of tech — it is a capital-intensive commodity business in which demand swings sharply with end-market cycles while supply is highly inelastic in the short term (fabs take years to bring online). That mismatch creates violent boom-bust cycles.
What Drives the Cycle
- Demand surge (new platform: smartphones, cloud, AI) -> prices rise, margins expand
- Supernormal profits -> all three companies increase capex simultaneously
- New capacity arrives 18-24 months later (fab construction lead times)
- Supply overwhelms demand -> prices collapse, margins turn negative
- Capex cuts, possibly restructuring -> supply tightens over 2-4 quarters
- Recovery -> cycle restarts
Typical cycle duration:
- Demand-driven upcycles: 4-7 quarters
- Oversupply-driven downturns: 4-8 quarters
- Full cycle: roughly 4 years peak-to-peak
Key metrics to watch:
- DRAM inventory days: Normal is ~8 weeks. Below 4 weeks = supercycle (supply-constrained). Above 12 weeks = downcycle (oversupplied).
- Gross margins: Memory peaks above 50-60% gross margins in upcycles; troughs below 20% or negative in downturns.
- Capex trends: Rising capex sows the seeds of the next downturn. The lag is 18-24 months.
- ASP (average selling price) trends: Sequential pricing changes are the most real-time indicator.
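The inventory-weeks thresholds above can be written down as a simple classifier. A minimal sketch — the function name and labels are illustrative, not an established industry indicator:

```python
def inventory_signal(weeks: float) -> str:
    """Map DRAM inventory weeks to a cycle phase using the thresholds above."""
    if weeks < 4:
        return "supercycle: supply-constrained"
    if weeks > 12:
        return "downcycle: oversupplied"
    return "normal"

# Readings drawn from the cycle history discussed in this piece:
print(inventory_signal(3.3))   # current reading
print(inventory_signal(8.0))   # long-run normal
print(inventory_signal(14.0))  # 2022-23 downturn territory (inventories hit 3-4 months)
```

The current 3.3-week reading lands squarely in the supply-constrained zone, which is the point made again in the cycle-indicators table later in this piece.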
The "Inverted P/E" Rule for Cyclical Stocks
This is the single most important investing concept for memory stocks: P/E ratios are inverted for cyclical companies.
- Low P/E (3-5x) = SELL signal. Earnings have surged faster than the stock price. This happens at the earnings peak, after which earnings collapse.
- High P/E or negative P/E = BUY signal. Earnings are at a trough. The stock is pricing in recovery that hasn't shown up in EPS yet.
- Around the 2018 earnings peak, Micron's trailing P/E bottomed at 2.96x — the lowest in its history. By that point the stock had already fallen ~30% from its May price peak, and it went on to fall roughly another 40%.
This is counterintuitive for investors trained on growth stocks, where a low P/E is a value opportunity. For memory stocks, a low P/E means peak earnings — and peak earnings mean the cycle is about to turn.
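The rule can be sketched as a signal function. This is a toy illustration: the 5x and 25x cutoffs are assumptions chosen to bracket the examples in this section, not calibrated levels.

```python
def cyclical_pe_signal(trailing_pe):
    """Inverted-P/E heuristic for deep cyclicals, per the rule above.
    Pass None for a loss-making company (P/E undefined)."""
    if trailing_pe is None or trailing_pe <= 0:
        return "BUY zone: losses, trough earnings"
    if trailing_pe <= 5:    # assumed cutoff, not a calibrated level
        return "SELL zone: peak earnings"
    if trailing_pe >= 25:   # assumed cutoff, not a calibrated level
        return "BUY zone: depressed earnings, recovery not yet in EPS"
    return "mid-cycle: no strong signal"

print(cyclical_pe_signal(2.96))  # 2018: record-low trailing P/E -> SELL zone
print(cyclical_pe_signal(None))  # FY2023: losses, P/E undefined -> BUY zone
print(cyclical_pe_signal(18.7))  # current trailing P/E from the valuation section
```

Note that the current 18.7x trailing multiple sits in the ambiguous middle: neither the classic single-digit sell signal nor the loss-making buy signal.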
8. Historical Cycle Guide: Investing Through Boom and Bust
Micron Stock: Trough-to-Peak Returns Across Cycles
| Cycle | Trough Date | Trough Price | Peak Date | Peak Price | Gain | Duration |
|---|---|---|---|---|---|---|
| GFC Recovery | Nov 2008 | $1.59 | Apr 2011 | $11.95 | 651% | ~29 months |
| Elpida/Oligopoly | May 2012 | $5.00 | Dec 2014 | $36.50 | 630% | ~31 months |
| Server Supercycle | May 2016 | $9.35 | May 2018 | $64.66 | 591% | ~24 months |
| Post-COVID | Dec 2018 | $29.00 | Jan 2022 | $98.45 | 239% | ~37 months |
| HBM/AI Supercycle | Dec 2022 | $48.43 | Mar 2026 | $471.34 | 873%+ | ~39 months (ongoing) |
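The gains column follows directly from the trough and peak prices; a quick reproduction (results match the table to within rounding):

```python
# Trough and peak prices from the table above.
cycles = [
    ("GFC Recovery",       1.59,  11.95),
    ("Elpida/Oligopoly",   5.00,  36.50),
    ("Server Supercycle",  9.35,  64.66),
    ("Post-COVID",        29.00,  98.45),
    ("HBM/AI Supercycle", 48.43, 471.34),
]
for name, trough, peak in cycles:
    gain_pct = (peak / trough - 1) * 100
    print(f"{name:<18} {peak / trough:5.2f}x  +{gain_pct:.0f}%")
```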
Key observations:
- Average trough-to-peak gain: ~600%. The pattern is remarkably consistent across 20 years. Buy at the trough, ride the upcycle, and you historically made 5-7x your money.
- The current cycle is already the largest. At 873% from trough, this cycle has exceeded every prior cycle's gain by a wide margin. Either HBM/AI demand is truly structurally different (justifying a larger-than-normal cycle), or we are deep into the euphoric phase.
- Stocks peak ~2 quarters before earnings peak. In 2018, Micron's stock peaked in May while earnings peaked in fiscal Q4 (the June-August quarter). In 2022, the stock peaked in January while revenue peaked in fiscal Q3 (the March-May quarter). This lead time gives investors a window to exit, but only if they're watching for it.
- The best buys are the ugliest moments. November 2008 (Micron at $1.59) was during the GFC, when Qimonda was going bankrupt. December 2022 (Micron at $48.43) was when the company was posting its worst losses in decades (a $5.8B net loss in FY2023). These were the best buying opportunities in the last 15 years.
Detailed Cycle-by-Cycle Review
2007-2009 (GFC): The Cycle That Killed the Competition
- DRAM prices fell at a 32% compound quarterly rate in late 2007
- Post-Lehman (September 2008): prices fell at a 40% quarterly rate — an 87% annual decline
- Micron fell from ~$12 to $1.59 (-87%)
- Qimonda (Germany's last DRAM maker) went bankrupt in January 2009
- Lesson: This cycle completed the industry consolidation that made the oligopoly possible. The survivors — Samsung, SK Hynix, Micron — emerged with the structural advantage that has persisted ever since.
2016-2018 (Server Supercycle): The Template for This Cycle
This is the closest historical analogue to the current HBM/AI supercycle:
- Demand driver: Cloud/data center buildout + smartphone DRAM content increases
- Micron revenue: $12.4B (FY2016) -> $20.3B (FY2017, +64%) -> $30.4B (FY2018, +50%)
- Peak margins: Gross margin 58.9%, operating margin 49.3% in FY2018
- DRAM inventory: Fell to 3-4 weeks (below 8-week normal)
- Stock: Trough $9.35 (May 2016) -> Peak $64.66 (May 2018) = 591% gain
- Then: Revenue fell to $23.4B in FY2019 (-23%), stock crashed to $29 by December 2018 (-55%)
Parallel to current cycle: Inventory is currently at 3.3 weeks (matching 2018 lows). Margins are higher than 2018 peaks (69% operating vs. 49%). Revenue growth is faster. The demand driver (AI/HBM) is arguably more durable than the 2016-2018 cloud buildout. But the capex response is also faster — all three companies are investing at record levels.
2022-2023 (Worst Downturn in Decades): The Most Recent Trough
- DRAM revenue dropped 29% in a single quarter (Q3 2022)
- Micron FY2023: Revenue $15.5B (-49.5% YoY), net loss $5.8B, gross margin -9.1%
- SK Hynix 2023: Net loss ~$7.75B; first operating loss in 10 years
- Samsung DS 2023: Operating losses estimated 10.3-14.7 trillion won
- Inventories tripled to 3-4 months of supply
- Micron stock: $98 (Jan 2022) -> $48.43 (Dec 2022) = -51%
- The recovery: Micron returned +70.6% in 2023, +227.6% in 2025, and has continued surging into 2026
Lesson: Even in the "age of the oligopoly" with only 3 players, the downturn was devastating. All three companies posted massive losses. The oligopoly doesn't prevent downturns — it makes recoveries faster and more profitable because there's no new entrant capacity waiting in the wings.
9. Where Are We in This Cycle?
Current Cycle Indicators (March 2026)
| Indicator | Current Reading | Historical Context | Signal |
|---|---|---|---|
| DRAM inventory (weeks) | 3.3 weeks | Normal: 8 weeks. 2018 supercycle low: 3-4 weeks | Extreme tightness |
| Micron gross margin | 74.9% (FQ2), guided 81% (FQ3) | 2018 peak: 58.9%. 2023 trough: -9.1% | All-time record |
| DRAM spot prices | +171% YoY | 2022 trough: -60% YoY | Strong upcycle |
| DDR5 spot prices | Quadrupled since Sept 2025 | — | Supercycle |
| DRAM contract prices (Q1 2026) | +55-60% QoQ | Normal: flat to +5% | Extreme |
| Capex (all 3 companies) | Record levels ($65B+ combined 2026) | 2018: $30B combined | Cycle warning |
| HBM capacity sold out | Through CY2027 | No historical precedent | Structural demand |
| LPDDR5X lead times | 26-39 weeks | Normal: 8-12 weeks | Severe shortage |
The Bull Case for "This Time Is Different"
- HBM is not a commodity. It requires 3x the wafer capacity per bit, has customer-specific logic dies (HBM4+), takes 6-12 months to qualify, and can't be easily switched between suppliers. This is closer to a design win than a commodity sale.
- AI demand is additive, not substitutional. HBM for AI accelerators is a new demand category on top of existing PC, mobile, server, and auto DRAM demand. LLM context windows (128K-1M tokens) require 40GB-40TB of memory per million concurrent users. Video generation models need 25x more memory than image models.
- Supply constraints are structural. New fabs take 3-5 years to build. Micron's Idaho fab won't produce volume until 2H 2027. Even with maximum investment, meaningful new supply won't arrive until 2028-2029.
- The oligopoly is more disciplined. Three players controlling 95% of supply can coordinate capex and pricing more effectively than 20 fragmented competitors. Long-term supply agreements (especially for HBM) provide visibility that didn't exist in prior cycles.
- HBM4's custom base die kills fungibility. When the base die includes customer-specific logic, the memory becomes a bespoke product. You can't dump excess Samsung HBM4 designed for OpenAI onto the Nvidia market. This reduces the risk of classic oversupply dynamics.
The Bear Case for "Cycles Always Win"
- Capex is rising to record levels. Combined 2026 capex for the Big 3 will exceed $65B, more than double the 2018 peak. Every prior cycle peaked after a capex surge.
- Algorithmic efficiency is a real risk. Google's TurboQuant (announced March 25, 2026) reduces AI inference KV cache memory requirements by 6x. Memory stocks fell 5-12% on the news. While TurboQuant only targets inference KV cache (not model weights or training), it demonstrates that software efficiency gains can erode hardware demand.
- 81% gross margins are unsustainable. No hardware business sustains 80%+ gross margins indefinitely. Either customers push back on pricing, competitors cut prices to gain share (Samsung is already doing this), or demand moderates.
- AI capex may plateau. If hyperscaler AI spending levels off — because models reach efficiency limits, because ROI doesn't materialize, or because regulation intervenes — the demand driver weakens while new supply arrives.
- The stocks are already up 600-873% from trough. Historically, that's where memory cycles peak. You can argue "this time is different," but the last five times people said that about memory, it wasn't.
My Assessment: Mid-to-Late Upcycle
We are in the strongest memory upcycle in history, but the cycle indicators are running hot. Inventory at 3.3 weeks, margins at 75% and guided higher, capex at record levels, prices up 55-60% QoQ — these are all signs of a market operating well above equilibrium. Historically, this is the phase where the trade becomes crowded and the risk/reward shifts.
The key difference from prior cycles is that HBM demand has structural underpinnings that conventional DRAM demand never had. If AI compute continues scaling — and the massive capex commitments from hyperscalers suggest it will through at least 2027 — then this cycle could extend 12-18 months beyond where a "normal" cycle would have peaked.
The key similarity to prior cycles is that capex is surging in response to supernormal profits, and new capacity will eventually arrive. The question isn't whether there will be another downturn — it's when, and how severe.
10. Valuation: What You're Paying
Current Multiples (March 27, 2026)
| Metric | Micron (MU) | SK Hynix (000660) | Samsung (005930) |
|---|---|---|---|
| Stock Price | ~$356 | KRW 933,000 | KRW 180,100 |
| Market Cap | ~$401B | ~$428B | ~$842B (full company) |
| Trailing P/E | 18.7x | ~14-18x | ~30-34x |
| Forward P/E (consensus) | ~6.4x | ~5.3x | ~8.4x |
| P/B | 7.9x | 6.0x | ~1.5-3.8x |
| EV/EBITDA | ~6.9-10.8x | ~8.1x | ~11.0-13.1x |
| 52-Week Return | ~+120% | ~+49% | ~+216% |
| Trough-to-Now (Dec 2022) | +635% | — | — |
Valuation vs. Historical Ranges (Micron)
| Metric | Current | Historical Low | Historical Median | Historical High |
|---|---|---|---|---|
| P/B | 7.9x | 0.81x (cycle trough) | 1.86x | 8.03x |
| Trailing P/E | 18.7x | 2.96x (2018 peak earnings) | 20.02x | 136.5x (recovery) |
| Forward P/E | ~6.4x | — | ~12x | — |
What this tells you:
- P/B of 7.9x is near the all-time high (8.03x). Historically, buying Micron above 4x P/B has resulted in poor 2-year returns. The 0.81x trough is where the monster gains started.
- Forward P/E of 6.4x looks "cheap" — but remember the inverted P/E rule. Low forward P/E on a cyclical stock means earnings are at or near peak. The market is telling you it doesn't believe these earnings are sustainable.
- Samsung at 8.4x forward P/E with a 1.5-3.8x P/B range is the cheapest name on paper, but the conglomerate structure (memory + foundry + display + mobile + appliances) makes direct comparison difficult. The memory-only business, if separated, would likely trade at a much higher P/B.
The "When Should I Sell?" Signals
Based on 20 years of memory cycle data, the following are historically reliable warning signs:
- P/B ratio above 6x (Micron currently at 7.9x — already in the danger zone)
- Gross margins above 60% (currently 75% and guided to 81% — deep in the danger zone)
- Capex rising >50% YoY (Micron FY2026 capex up ~40% and guided higher)
- All analysts bullish (30 out of 30 analysts rate MU Buy/Strong Buy)
- "This time is different" narratives dominating (check — HBM structural demand story)
- Stock price flattening despite improving fundamentals (Micron hit ATH of $471 on March 18, then pulled back to ~$356 on Google TurboQuant — a potential divergence)
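The quantitative signals in the list above can be mechanized into a simple quarterly checklist. A minimal sketch: the thresholds come from the list, the metric values mirror the March 2026 figures cited in this section, and the structure (names, dict keys) is my own, not any standard screen.

```python
# Late-cycle warning checklist for memory stocks; thresholds per the list above.
SIGNALS = {
    "P/B above 6x":           lambda m: m["pb"] > 6.0,
    "Gross margin above 60%": lambda m: m["gross_margin"] > 0.60,
    "Capex rising >50% YoY":  lambda m: m["capex_yoy"] > 0.50,
    "All analysts bullish":   lambda m: m["buy_ratings"] == m["total_ratings"],
}

def triggered(metrics):
    """Return the names of all warning signals that fire on the given metrics."""
    return [name for name, check in SIGNALS.items() if check(metrics)]

current = {
    "pb": 7.9,             # Micron P/B (near all-time high)
    "gross_margin": 0.75,  # current, guided to 0.81
    "capex_yoy": 0.40,     # FY2026 capex up ~40%, guided higher
    "buy_ratings": 30,     # 30 of 30 analysts at Buy/Strong Buy
    "total_ratings": 30,
}

warnings = triggered(current)
print(f"{len(warnings)}/4 signals firing: {warnings}")
# 3/4 firing: capex growth (~40%) sits just below the 50% threshold, for now.
```

The two qualitative signals ("this time is different" narratives, price/earnings divergence) don't reduce to a numeric threshold and are deliberately left out; they still require judgment.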
11. Bull and Bear Cases by Company
SK Hynix
Bull Case:
- 70% of Nvidia's HBM4 orders = toll road on AI compute
- 58% operating margin with room to expand as HBM4 scales
- First-mover on every HBM generation creates compounding advantage
- Potential US listing (H2 2026) could unlock $10-14B and re-rate the stock
- Forward P/E ~5.3x on record earnings
- $10B AI solutions company investment could diversify beyond commodity memory
Bear Case:
- Nvidia customer concentration — if Nvidia diversifies supply or loses GPU share to custom ASICs, SK Hynix disproportionately loses
- Korean corporate governance discount (SK Group controlling shareholder dynamics)
- At 6x P/B, stock is priced for continued supercycle
- HBM4 yield advantage erodes as Samsung and Micron catch up
- If the cycle turns, the stock has further to fall from current levels than from a more reasonable valuation
My view: SK Hynix is the "quality premium" play — the best-positioned company at the highest valuation. If the supercycle extends through 2027-2028 as HBM4E and Feynman ramp, SK Hynix will likely outperform. If the cycle peaks in 2026, it will correct hard from these levels.
Samsung
Bull Case:
- Most diversified HBM customer base: Google (60%+), AMD (primary), OpenAI (exclusive Titan deal), Nvidia (~30%)
- Only company with integrated memory + foundry + advanced packaging
- HBM4 mass production underway; on track for ~28% HBM market share in 2026
- Foundry losses narrowing (KRW 800B in Q4 2025 vs. KRW 2T+ earlier) — eventual breakeven could re-rate the stock
- Tesla $16.5B 2nm AI chip contract validates foundry competitiveness
- Cheapest on P/B basis (~1.5-3.8x) due to conglomerate discount
- Samsung's HBM4E (shown at GTC 2026) positions it for the 1 TB Rubin Ultra era
- Price: KRW 180,100 vs. the prior peak of ~KRW 95,000 two years ago, roughly 90% above the old high but with much further to run if the HBM thesis plays out
Bear Case:
- 18-month HBM3E qualification failure demonstrated execution risk that could recur
- HBM4 yields at ~60% vs. SK Hynix's ~80% on HBM3E — yield gap needs to close
- Foundry division is a capital furnace that may not reach profitability until 2027+
- Conglomerate discount is rational — the mobile and display divisions face secular challenges
- Samsung's aggressive 30% HBM3E price cuts signal it's buying share, not earning it — margin-destructive for the industry
- CHIPS Act funding reduced from $6.4B to $4.7B after Samsung cut Texas investment from $44B to $37B
My view: Samsung is the contrarian play. It's the cheapest on a sum-of-parts basis, has the most diversified HBM customer base, and is the only company where the market is underweighting the HBM recovery story because the consolidated financials are diluted by foundry losses. If you believe (a) Samsung's foundry eventually breaks even, (b) HBM4 yields improve to competitive levels, and (c) custom ASICs grow as a share of AI compute (benefiting Samsung's Broadcom/Google/OpenAI relationships), then Samsung offers the best risk-adjusted return. The risk is that Samsung's history of over-investment and execution stumbles repeats itself.
Micron
Bull Case:
- Purest financial exposure to memory — no foundry drag, no conglomerate discount
- 69% operating margin (FQ2 2026), guided to ~81% gross margin in FQ3 — best-in-class
- $6.4B CHIPS Act funding + $200B US investment = geopolitical premium
- HBM capacity sold out through CY2027
- FQ3 2026 guidance implies annualized revenue of $134B — stock at 3x forward annualized revenue
- Analyst consensus target $443-463, with bulls at $500-700
Bear Case:
- Smallest HBM market share (~18%) — least leverage to the highest-growth segment
- P/B of 7.9x is near all-time highs — historically the worst time to buy
- Capex ramping to $25B+ (FY2026) and "meaningfully higher" in FY2027 — classic cycle-peak behavior
- Idaho and New York fabs are multi-year projects that add enormous capacity in 2028-2030 — exactly when oversupply risk materializes
- 30 of 30 analysts at Buy/Strong Buy — no skeptics left to convert
- Recent 24% pullback from ATH ($471 -> $356) on Google TurboQuant may be the beginning of the stock diverging from earnings (the classic late-cycle signal)
My view: Micron is the best-operated memory company and the cleanest way to play the cycle. But it's also the stock most at risk of the classic "peak earnings, peak multiple" trap. The forward P/E of 6.4x looks cheap until you remember that Micron's forward P/E was 2.96x at the May 2018 peak — and the stock fell 55% from there. The 81% gross margin guidance for FQ3 is extraordinary, but margin peaks have historically coincided with stock price peaks. If you're adding exposure now, you need to believe the supercycle extends materially beyond the 2018 template.
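The "peak earnings, peak multiple" trap referenced above is easy to show numerically: re-rate the same stock price against normalized earnings instead of peak earnings. In this sketch, the price, forward P/E, and 2018 analogue come from this note; the mid-cycle EPS is a purely hypothetical number for illustration, not a forecast.

```python
# Why a low forward P/E on a cyclical can be a trap: the same price looks
# very different against peak earnings vs. normalized (mid-cycle) earnings.

price = 356.0       # MU, March 2026 (per this note)
forward_pe = 6.4    # consensus forward P/E (per this note)

implied_fwd_eps = price / forward_pe
print(f"Implied forward EPS: ${implied_fwd_eps:.0f}")       # ~$56/share at peak

# Hypothetical mid-cycle EPS -- pure illustration, not a forecast.
mid_cycle_eps = 15.0
normalized_pe = price / mid_cycle_eps
print(f"P/E on normalized earnings: {normalized_pe:.1f}x")  # ~23.7x, not cheap

# The 2018 analogue from the text: Micron traded at 2.96x forward P/E at the
# May 2018 peak and still fell 55% as earnings rolled over.
```

The exercise shows why the inverted P/E rule works: the cheap-looking multiple is a statement about peak earnings, and the multiple triples the moment you swap in a plausible mid-cycle denominator.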
12. How to Invest in This Cycle: A Framework
Lessons from Five Cycles of Memory Investing
Lesson 1: Buy when they're losing money.
The best entry points for memory stocks have always been when the companies are reporting losses and the P/E ratio is negative or undefined. Micron's 5 best buying opportunities over 20 years were all during periods of negative EPS. Average subsequent return: ~600% over 24-31 months.
Lesson 2: P/B below 1x is the clearest buy signal.
Micron's P/B hit 0.81x at the December 2022 trough. Every time P/B has approached or gone below 1x, it has been an exceptional buying opportunity. We are nowhere near that now (7.9x).
Lesson 3: The stock peaks before earnings peak.
In 2018: stock peaked May, earnings peaked Q4 (November quarter). In 2022: stock peaked January, revenue peaked Q3 (August quarter). The lead time is typically 1-2 quarters. If Micron's $471 ATH on March 18 was the stock price peak, earnings should peak around Q3-Q4 2026 (the August-November quarters).
Lesson 4: Rising capex is the leading indicator of the next downturn.
The lag is 18-24 months. With combined Big 3 capex at $65B+ in 2026 and rising, the supply response begins arriving in meaningful volume in 2028. This aligns with the 2027-2028 timeframe that some analysts are flagging as the potential cycle peak.
Lesson 5: The oligopoly makes recoveries faster, not downturns milder.
The 2022-2023 downturn was the worst in decades despite the 3-player structure. All three companies posted massive losses. The oligopoly's benefit is on the recovery side — fewer competitors means faster supply discipline and quicker margin recovery.
Applying This to the Current Cycle
If you don't own these stocks and want exposure:
- You're late to this cycle. The easy money (600%+ returns from the 2022 trough) is already made.
- Samsung offers the best risk-adjusted entry point: cheapest valuation, most diversified HBM customer base, and the most underappreciated recovery story. The conglomerate discount provides downside cushion.
- For Micron and SK Hynix, the risk/reward has shifted. You're buying near historical P/B highs into what is already the longest and largest memory upcycle on record. The potential upside is real (if the supercycle extends to 2028), but the historical pattern says the stocks are in the zone where peaks happen.
If you already own these stocks:
- The cycle indicators are flashing late-stage. Margins at 75% and guided to 81%, inventory at 3.3 weeks, capex at records, all analysts bullish — this is the euphoric phase.
- Consider trimming into strength rather than selling outright. The HBM structural story could extend this cycle 12-18 months beyond historical norms, but the risk/reward is no longer asymmetric to the upside.
- Watch for the stock-to-earnings divergence: if stocks start falling or flattening while earnings continue rising, that's the most reliable exit signal in memory cycles. Micron's pullback from $471 to $356 (-24%) after hitting ATH on March 18 is worth monitoring closely — it could be noise (Google TurboQuant), or it could be the beginning of the divergence.
If you're waiting for the next cycle trough:
- Based on historical patterns and the 18-24 month capex lag, the next downturn could arrive in the 2028-2029 timeframe.
- The buy signals to watch for: P/B approaching 1x, negative EPS quarters, industry inventory above 12 weeks, multiple quarters of capex cuts, analyst downgrades shifting to majority Sell/Hold.
- If the pattern holds, the next trough will offer another 400-600%+ return opportunity over the subsequent 24-36 months.
What Could Make This Cycle Genuinely Different
There are structural arguments that the HBM/AI demand cycle breaks the historical pattern:
- HBM4's custom base die makes each unit semi-bespoke. If memory becomes more like a design win and less like a commodity, pricing power becomes stickier and oversupply dynamics soften.
- AI model size is still growing exponentially. GPT-4 reportedly used ~1.8 trillion parameters, and GPT-5 and its successors will require more. Each model generation demands more memory, a demand curve steeper than anything in memory history.
- Inference is scaling faster than training. As AI models are deployed to billions of users, inference memory demand could dwarf training demand. Every ChatGPT query, every autonomous vehicle decision, every AI agent action requires memory.
- Geographic diversification (CHIPS Act, reshoring) adds demand. US, EU, Japan, and Korea are all subsidizing domestic memory production, which means excess demand for equipment and capacity even if end-demand softens.
However, every structural argument for "this time is different" has been made before in memory. In 2017, the bull case was that cloud buildout was a permanent demand shift that would end the cycle. It didn't. In 2006, the bull case was that smartphones were a permanent demand shift. They were — but the cycle still happened. The structural demand driver was real each time; the cycle dynamics still overwhelmed it.
The honest answer: HBM is structurally different from prior memory demand drivers in degree, but probably not in kind. The cycle will likely be longer and higher than normal, but it won't be permanent. Position sizing, trimming discipline, and cycle awareness remain essential.
HBM Market Size Projections
| Year | HBM Market Size | Growth | Source |
|---|---|---|---|
| 2024 | ~$20B | — | Industry estimates |
| 2025 | ~$35B | +75% | Yole Group, Micron |
| 2026E | ~$55B | +57% | Bank of America |
| 2027E | ~$75B | +36% | Extrapolated |
| 2028E | ~$100B | +33% | Micron, pulled forward |
| 2033E | ~$130B | — | Bloomberg Intelligence |
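As a sanity check on the projection table, the growth column can be recomputed directly from the market-size column, and the implied 2024-2028 CAGR is worth knowing when comparing these estimates against others:

```python
# Recompute YoY growth and CAGR from the HBM market-size projections above ($B).
hbm = {2024: 20, 2025: 35, 2026: 55, 2027: 75, 2028: 100}

years = sorted(hbm)
for prev, curr in zip(years, years[1:]):
    yoy = hbm[curr] / hbm[prev] - 1
    print(f"{curr}: +{yoy:.0%}")  # matches the table: +75%, +57%, +36%, +33%

cagr = (hbm[2028] / hbm[2024]) ** (1 / (2028 - 2024)) - 1
print(f"2024-2028 CAGR: {cagr:.1%}")  # ~49.5% per year
```

The growth column is internally consistent with the size column, so the table's only real degrees of freedom are the dollar estimates themselves, which blend multiple sources (Yole, Bank of America, Micron) rather than a single forecaster's curve.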
Capex Comparison (2026 Estimates)
| Company | 2026 Capex | YoY Change | Focus |
|---|---|---|---|
| Micron | >$25B | +40% | 1-gamma node, HBM4, US fabs |
| SK Hynix | ~$20-21B | +17-24% | M15X HBM4, Indiana packaging |
| Samsung (total semi) | ~$73B | ~2x YoY | Memory + foundry + R&D |
| Samsung (memory only) | ~$20B | +11% | 1c process HBM, Pyeongtaek expansion |
| Combined Big 3 (memory) | ~$65B+ | — | — |
Technology Node Comparison
| Node | Samsung | SK Hynix | Micron |
|---|---|---|---|
| 1-alpha (1a) | Mass production | Mass production | Mass production |
| 1-beta (1b) | Mass production | Mass production | Mass production |
| 1-gamma (1c) | Ramping for HBM4 | Leading with EUV (5+ layers) | Transitioning; 1b freed for HBM |
Key Data Sources
- Micron Q2 FY2026 Earnings Release
- SK Hynix FY2025 Financial Results
- Samsung Q4/FY2025 Earnings
- TrendForce: HBM market share, pricing, qualification reports
- Counterpoint Research: HBM4 market share forecast
- Bank of America: 2026 HBM market estimate ($54.6B)
- UBS: SK Hynix 70% Nvidia HBM4 allocation
- Samsung HBM3E qualification history (KED Global)
- Samsung OpenAI Titan HBM4 deal (Android Headlines)
- Google TurboQuant impact analysis (TrendForce)
- Micron historical P/B and P/E ratios (GuruFocus, MacroTrends)
- Memory cycle history (Uncover Alpha, Nomad Semi, SemiWiki)
- Nvidia Vera Rubin platform details (Nvidia Developer Blog)
- DRAM spot and contract pricing (TrendForce, Tom's Hardware)