The AMD Earnings Debate - Q425
BEAR: Alright, stock’s down 8% after hours on a beat. The market’s telling you something.
BULL: The market’s telling me it had unrealistic expectations. $10.27 billion in revenue, 34% growth, record EPS—and the guide came in above Street. What exactly disappointed here?
BEAR: The trajectory. Data Center grew 39%, which sounds great until you remember Nvidia’s doing 80%+ in the same segment. AMD’s supposedly in the middle of an AI inflection, but the acceleration isn’t there.
On Architecture and the MI300 Family
BULL: Let’s talk about the actual silicon. The MI300X is a 153 billion transistor chiplet design with 192GB of HBM3—that’s 2.4x the memory capacity of an H100. For inference workloads, memory bandwidth is the bottleneck, not raw FLOPs. AMD architected this chip specifically for where the puck is going.
BEAR: Except training is still where the money is. And in training, you need the software stack. CUDA has fifteen years of ecosystem lock-in—every ML framework, every library, every grad student’s muscle memory. ROCm is improving, but ‘improving’ isn’t ‘winning.’
BULL: That’s the stale narrative. PyTorch 2.0 compiles to multiple backends now. The abstraction layer is moving up the stack. And look at the customer wins—OpenAI, Microsoft, Oracle. These aren’t charity purchases. They’re buying because MI300 delivers better TCO on specific workloads.
BEAR: Be specific. Which workloads?
BULL: Large batch inference. Mixture-of-experts models where you need to keep multiple experts resident in memory simultaneously. Long-context transformers where KV cache size explodes. Anything memory-bound, AMD wins on perf-per-dollar.
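The memory-bound argument above is easy to sanity-check with arithmetic. The sketch below sizes the KV cache for a hypothetical 70B-class model with grouped-query attention; the model geometry and the ~140 GB fp16 weight footprint are illustrative assumptions, not figures from the transcript. Only the 192 GB and 80 GB HBM capacities come from the discussion.

```python
# Back-of-envelope KV-cache sizing for a hypothetical 70B-class model.
# Model geometry and weight footprint are assumptions for illustration only.

def kv_cache_bytes(n_layers, n_kv_heads, head_dim, seq_len, batch, bytes_per_elem=2):
    # Keys and values are both cached, hence the factor of 2.
    return 2 * n_layers * n_kv_heads * head_dim * seq_len * batch * bytes_per_elem

# Assumed Llama-70B-like geometry with grouped-query attention (GQA).
cfg = dict(n_layers=80, n_kv_heads=8, head_dim=128)

per_seq_gb = kv_cache_bytes(**cfg, seq_len=128_000, batch=1) / 1e9
print(f"KV cache, one 128k-token sequence: {per_seq_gb:.1f} GB")

# How many concurrent 128k-token sequences fit after an assumed
# ~140 GB of fp16 weights are resident on the card?
WEIGHTS_GB = 140
for hbm_gb, name in [(192, "MI300X"), (80, "H100")]:
    free = hbm_gb - WEIGHTS_GB  # negative => weights alone exceed one card
    fits = max(0, int(free // per_seq_gb))
    print(f"{name}: {hbm_gb} GB HBM -> {fits} long-context sequence(s) per GPU")
```

Under these assumptions the 80 GB card cannot even hold the weights without tensor parallelism, while the 192 GB card keeps weights plus a full 128k-token KV cache resident—the structural point the bull is making.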
The Inference vs. Training Question
BEAR: But here’s my problem with the inference thesis. Training is still 70% of GPU spend. And even in inference, Nvidia’s not sitting still—Blackwell is specifically optimized for inference efficiency. The FP4 support, the smaller die options. They see the same shift you do.
BULL: Training was 70% of spend. Past tense. The ratio is inverting as models move to production. ChatGPT alone is reportedly running millions of inference queries per hour. Every deployed model needs 10-100x more inference compute than the training run that created it. AMD timed the MI300 perfectly for this transition.
BEAR: Timing is convenient in hindsight. What about MI400? MI500? Nvidia’s on a one-year cadence now. Can AMD’s roadmap keep pace, or do they fall a generation behind again like they did with Vega?
BULL: Different company. Lisa Su’s execution since 2014 has been flawless—Zen, EPYC, the TSMC relationship. The MI350 is on track for this year, MI450 for late 2026, MI500 in 2027. They’re on the same cadence now, and the chiplet architecture gives them more flexibility to iterate.
The EPYC Angle
BEAR: Let’s talk about the part of Data Center that actually makes money. EPYC is real—I’ll give you that. But server CPUs are a 30% gross margin business. The $5.4 billion Data Center number blends high-margin Instinct with lower-margin EPYC. What’s the actual GPU revenue?
BULL: They don’t break it out, but the mix is clearly improving. Non-GAAP gross margin is guiding to 55% next quarter, up from 52% for the full year. That’s not a margin profile driven by CPUs.
BEAR: Or it’s driven by the MI308 China sales, which are basically found money from inventory they’d written off. That $360 million reserve reversal flatters margins this quarter and next. It’s not recurring.
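The bear’s point about the reserve reversal flattering margins is checkable with quick arithmetic. The sketch below assumes, for simplicity, that the full $360M reversal landed in Q4’s cost of sales; the bear says it is actually spread across this quarter and next, so treat this as an upper bound on the single-quarter effect.

```python
# Rough arithmetic on how a one-time inventory reserve reversal flatters
# gross margin. Figures are from the discussion; attributing the entire
# reversal to Q4 is a simplifying assumption (it likely spans two quarters).

revenue_q = 10.27e9   # Q4 revenue
reversal = 360e6      # MI308 inventory reserve reversal cited above
reported_gm = 0.54    # Q4 GAAP gross margin

gm_boost_pp = reversal / revenue_q * 100
gm_ex_reversal = (reported_gm * revenue_q - reversal) / revenue_q

print(f"Reversal worth up to ~{gm_boost_pp:.1f}pp of gross margin")
print(f"GAAP gross margin ex-reversal: ~{gm_ex_reversal * 100:.1f}%")
```

Even at the upper bound, roughly 3.5 points of the reported 54% GAAP gross margin could be non-recurring, which is the substance of the bear’s objection.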
BULL: Fair. But even ex-China, the trajectory is clear. EPYC is taking share from Intel at an accelerating rate while Instinct ramps. That’s two secular growth drivers in the same segment. Name another company with that setup.
Competitive Dynamics
BEAR: The competitive setup is exactly my concern. You’ve got Nvidia above you with CUDA moat intact. You’ve got hyperscalers below you building custom silicon—Google TPUs, Amazon Trainium, Microsoft Maia. Where does AMD fit in a world where the biggest buyers make their own chips?
BULL: In the middle, where all the enterprise money is. Not every company can afford a custom silicon program. AMD is the merchant alternative to Nvidia’s monopoly pricing. That’s a huge TAM—think about every Fortune 500 company that wants AI inference at scale but can’t get Nvidia allocation.
BEAR: Assuming they can’t get Nvidia allocation. Supply’s loosening. Lead times are coming down. The ‘buy AMD because you can’t get H100s’ thesis had a window, and it’s closing.
BULL: That was never my thesis. My thesis is AMD wins on TCO for inference-heavy workloads, full stop. The memory advantage is structural. 192GB of HBM3 versus 80GB on H100 isn’t a supply dynamic—it’s an architecture choice that matters more as model sizes grow.
The HBM Supply Question
BEAR: Speaking of memory, let’s talk about the HBM supply chain. SK Hynix is basically Nvidia’s captive supplier. Samsung and Micron are playing catch-up. If AMD can’t secure enough HBM3e for MI350, the whole roadmap slips.
BULL: That’s a real risk, but AMD’s been locking in supply agreements. The earnings call mentioned they’re comfortable with procurement through 2026. And the constraint cuts both ways—if HBM is tight for everyone, it limits Nvidia’s ramp too.
BEAR: Nvidia gets priority allocation. That’s how these relationships work when you’re 80% of the market.
BULL: Which is exactly why Samsung and Micron want AMD to succeed. They need a credible alternative buyer to have any negotiating leverage with Nvidia. The incentives are aligned.
What’s It Worth?
BEAR: Even if I buy everything you’re saying, what multiple do I pay? Stock’s doubled in a year. At these levels, you need flawless execution and a perfect environment. One guide down and this thing gives back 30%.
BULL: You’re paying for 34% revenue growth, improving margins, and $2 billion in quarterly free cash flow. Management guided to 35%+ operating margins and $20+ EPS over the next few years. If they deliver that, the stock’s cheap here.
BEAR: Big ‘if.’ That’s a lot of execution against a competitor that hasn’t lost a step.
BULL: Every great investment has a big ‘if.’ The question is whether you’re getting paid for the risk. I think you are. This isn’t 2020 AMD trading at 100x earnings on a dream. It’s a company printing records every quarter with a clear path to continued share gains.
Final Word
BEAR: I’ll concede the execution has been excellent. But excellence is priced in. I need to see Instinct become a real business, not a rounding error next to Nvidia. Show me 20% GPU market share and I’ll flip.
BULL: By the time you see 20% share, the stock will be twice this price. The whole point is you’re buying the inflection before it’s obvious. $5.4 billion in Data Center, 39% growth, with the best inference architecture in the market. That’s your entry point.
BEAR: We’ll see. I’ll be here when the HBM supply deal falls through.
BULL: And I’ll be here when MI350 beats numbers and the Street finally figures out the inference story.
The Result
AMD delivered record Q4 2025 results, with revenue of $10.27 billion (+34% YoY) and non-GAAP EPS of $1.53 (+40% YoY). The Data Center segment was the primary growth engine, driven by strong EPYC CPU demand and continued Instinct GPU momentum. Full-year 2025 revenue reached a record $34.6 billion. Management guided Q1 2026 revenue to $9.8 billion, above consensus estimates, while signaling continued AI and data center momentum.
Q4 2025
Metric | Result | Change | Notes
Revenue | $10.27B | +34% YoY | Record quarter
Non-GAAP EPS | $1.53 | +40% YoY | Record
GAAP Net Income | $1.51B | +213% YoY | vs $0.48B prior year
GAAP EPS | $0.92 | +217% YoY | vs $0.29 prior year
GAAP Gross Margin | 54% | +3pp YoY | +2pp QoQ
GAAP Operating Income | $1.75B | +56% YoY | 17% operating margin
Free Cash Flow | $2.08B | | Record quarter; 20% FCF margin
Stock-Based Comp (FY) | $1.64B | | 4.7% of revenue; $1.15B in first 9 months
Segment Performance
Segment | Revenue | YoY Growth | Key Drivers
Data Center | $5.4B | +39% | Record; EPYC & Instinct GPUs
Client | $3.1B | +37% | Record; Ryzen share gains
Gaming | $950M | +50% | Semi-custom SoCs, Radeon
Embedded | $950M | +3% | Stabilizing
Full Year 2025
Metric | Result | Change | Notes
Revenue | $34.6B | +34% YoY | Record year
GAAP Gross Margin | 50% | +1pp YoY | vs 49% in 2024
Non-GAAP Gross Margin | 52% | -1pp YoY | vs 53% in 2024
GAAP Operating Income | $3.7B | | 10.7% operating margin
Non-GAAP Operating Income | $7.8B | | 22.5% operating margin
GAAP Diluted EPS | $2.65 | |
Non-GAAP Diluted EPS | $4.17 | |
The Outlook
Q1 2026 Outlook
Revenue Guidance: $9.8B (±$300M)
YoY Growth: +32%
Sequential Change: -5% quarter-over-quarter (seasonal)
Non-GAAP Gross Margin: 55%
China Revenue (MI308): $100M included in guidance
Management Commentary
CEO Lisa Su characterized 2025 as a “defining year,” emphasizing AMD’s ability to scale profitably while expanding its AI portfolio. Management highlighted the company’s full-stack AI strategy encompassing GPUs, CPUs, networking, and software as a key competitive differentiator.
Su noted that the AI boom is boosting sales across AMD’s product portfolio, not just GPUs: “Server CPU demand remains very strong. Hyperscalers are expanding their infrastructure to meet growing demand for cloud services in AI, while enterprises are modernizing their data centers.”
CFO Jean Hu highlighted record non-GAAP operating income and free cash flow, reinforcing AMD’s improving operating leverage and financial discipline.
Market Reaction
Despite beating consensus estimates on both revenue and earnings, AMD shares declined approximately 8% in after-hours trading. Investors had elevated expectations heading into the report, and some viewed the Q1 guidance as insufficient given the AI spending boom, even though it came in above Street consensus.
Positive catalysts included record revenue and EPS, expanding margins, and robust free cash flow. Concerns centered on China-related GPU sales volatility and questions about AMD’s ability to capture share from Nvidia in the AI accelerator market.