AMD closed Friday at $245.04. Nvidia closed at $188.63. On a per-share basis, AMD looks like the pricier stock. But per-share price tells you nothing about value. Strip away that illusion and the real comparison begins: a $397 billion market cap versus a $4.6 trillion titan. That 12x gap either represents Nvidia's earned dominance or the biggest mispricing in semiconductors.
Both companies are riding the same AI infrastructure wave. Both posted record data center revenue in their most recent fiscal years. Yet the market has decided Nvidia is worth more than eleven AMDs combined. For investors in April 2026, the question is whether that gap narrows, holds steady, or stretches even further.
AMD vs Nvidia: The Numbers That Matter
Before getting into strategy and catalysts, the financial comparison needs context. Here are the metrics that separate these two companies heading into Q2 2026:
| Metric | AMD (FY2025) | Nvidia (FY2026) |
|---|---|---|
| Stock Price (Apr 11) | $245.04 | $188.63 |
| Market Cap | ~$397B | ~$4.62T |
| Data Center Revenue | $16.6B | $193.7B |
| DC Revenue Growth (YoY) | +32% | +68% |
| Gross Margin | ~52% | ~75% |
| PEG Ratio | ~0.8 | ~0.4 |
The table reveals an interesting paradox. AMD is posting a respectable +32% year-over-year data center print, but from a base roughly one-twelfth of Nvidia's, while Nvidia is compounding at +68% on far greater scale. On top of that, Nvidia's gross margins are 23 percentage points higher, and its PEG ratio suggests the market is actually pricing Nvidia more cheaply relative to its growth. A PEG of 0.4 means investors are paying roughly 40 cents for every percentage point of expected earnings growth at Nvidia, compared to 80 cents at AMD.
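The PEG arithmetic can be made explicit with a short sketch. The P/E and growth inputs below are hypothetical values chosen only to reproduce the article's approximate 0.4 and 0.8 PEGs; actual estimates vary by data source.

```python
def peg(pe_ratio: float, growth_pct: float) -> float:
    """PEG ratio: forward P/E divided by expected EPS growth (in percent).
    Lower = cheaper per unit of expected growth."""
    return pe_ratio / growth_pct

# Hypothetical inputs (not sourced figures) that land on the article's PEGs:
nvda_peg = peg(pe_ratio=28, growth_pct=70)  # = 0.40
amd_peg = peg(pe_ratio=40, growth_pct=50)   # = 0.80
```

On this framing, a lower PEG means each percentage point of expected growth costs less, which is why Nvidia screens as the cheaper stock despite the vastly larger market cap.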
The AI Data Center Revenue Gap
Nvidia's FY2026 data center revenue hit $193.7 billion. AMD's data center segment generated $16.6 billion in FY2025. That ratio of roughly 12:1 almost perfectly mirrors the market cap ratio, suggesting the market is pricing both companies at a similar multiple per dollar of data center revenue.
AMD's data center business grew from $12.6 billion in FY2024 to $16.6 billion in FY2025. Nvidia's data center grew from $115.2 billion in FY2025 to $193.7 billion in FY2026. Both trajectories are extraordinary, but AMD is earlier on its S-curve. Growing from $16B to $30B requires capturing incremental hyperscaler contracts. Growing from $193B to $300B requires the entire AI infrastructure market to expand while maintaining dominant share. Neither is easy, but AMD's task has fewer structural ceilings.
The critical number to watch: AMD's data center revenue as a percentage of Nvidia's. At 8.6% today, any sustained move above 12-15% would signal a genuine shift in competitive dynamics. The AMD investor relations page tracks quarterly segment breakdowns where this trend is most visible.
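The suggested signal is easy to track quarter by quarter. A minimal sketch, using the segment revenue figures quoted above (in $B):

```python
# Revenue figures from the article: AMD FY2025 vs Nvidia FY2026 data center.
amd_dc = 16.6
nvda_dc = 193.7

share = amd_dc / nvda_dc * 100
print(f"AMD DC revenue = {share:.1f}% of Nvidia's")  # ~8.6%

# The article's threshold: a sustained move above 12-15% would signal
# a genuine competitive shift. Not triggered at today's levels.
shift_signal = share > 12
```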
AMD's Catalyst: The Meta $60 Billion Deal
The single biggest development in AMD's investment thesis landed in early April 2026: a $60 billion infrastructure commitment from Meta Platforms covering GPUs, CPUs based on AMD's Venice architecture, and custom silicon co-engineering. This deal validated AMD as a genuine second source for hyperscale AI infrastructure, not merely a price-pressure negotiating chip against Nvidia.
Meta's motivation was explicit. Zuckerberg's infrastructure team wanted supply diversification (Nvidia supply constraints delayed multiple Meta AI projects in 2024-2025), pricing leverage against a single-supplier dependency, and co-engineering access to optimize silicon for Meta's specific AI workloads. Bank of America's Justin Post characterized the deal as a structural shift in how hyperscalers procure AI compute.
The deal includes a warrant structure: 160 million AMD shares issued to Meta at $0.01 exercise price, with a parallel 160 million share warrant from an OpenAI deal announced in October 2025. That 320 million total share overhang represents roughly 20% dilution at full exercise, but the warrants vest gradually and only create maximum dilution above $600 per share. At today's $245, the dilution impact is partial and priced into the stock.
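The dilution math behind that "roughly 20%" figure can be sketched as follows. The warrant count comes from the deal terms above; AMD's base share count of roughly 1.62 billion is an assumption for illustration.

```python
# Warrant shares from the article: 160M (Meta) + 160M (OpenAI).
warrant_shares = 320e6
base_shares = 1.62e9  # assumed AMD shares outstanding (illustrative)

# New shares as a fraction of the existing count at full exercise:
full_dilution_pct = warrant_shares / base_shares * 100
print(f"Dilution at full exercise: {full_dilution_pct:.1f}%")  # ~19.8%
```

Because the warrants vest gradually and reach maximum dilution only at much higher share prices, the realized dilution at any point in time should sit well below this ceiling.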
Nvidia's Moat: Why CUDA Keeps Winning
Nvidia's competitive advantage extends far beyond silicon. CUDA, its proprietary software development platform launched in 2006, has spent two decades building an ecosystem that AMD's ROCm alternative has barely dented. Millions of developers, thousands of enterprise applications, and virtually every major AI framework from PyTorch to TensorFlow runs on CUDA first. Porting to ROCm works, but enterprise customers choosing between "works flawlessly" and "works after optimization" tend to pay the Nvidia premium.
The Blackwell architecture shipping throughout 2026 widens the hardware gap. GB200 NVL72 racks deliver 30x inference performance versus the prior Hopper generation. But AMD is not standing still. The MI450 Instinct accelerator on TSMC's 2nm node makes AMD the first major GPU vendor to ship on that cutting-edge process, potentially beating Nvidia's Rubin (which uses TSMC N3). The Helios rack-scale solution packing 72 MI450 GPUs delivers 2,900 PFLOPS of FP4 compute with 31TB of HBM4 memory and 1,400 TB/s memory bandwidth.
Those Helios specs actually exceed Nvidia's comparable NVL144 rack on memory capacity (31TB vs 21TB) and memory bandwidth (1,400 TB/s vs 936 TB/s), though Nvidia's rack pushes higher FP4 compute (3,600 PFLOPS vs 2,900 PFLOPS). AMD also secured an Oracle partnership to deploy 50,000 MI450 GPUs in AI superclusters, demonstrating real enterprise pull beyond the Meta relationship. The competitive gap, while still meaningful, is narrower on paper than it has ever been.
Nvidia is also building a software revenue moat. CUDA Enterprise, Omniverse, and AI Enterprise licensing generated growing recurring revenue, transforming Nvidia from a pure hardware company into a platform business. AMD has no equivalent software monetization strategy. Every AMD GPU sale is a one-time hardware transaction; every Nvidia GPU sale seeds a recurring software relationship.
Valuation: Who Is Actually Cheaper?
This is where the comparison gets counterintuitive. Despite Nvidia's $4.6 trillion market cap dwarfing AMD's $397 billion, Nvidia is arguably the cheaper stock on a growth-adjusted basis.
Nvidia's PEG ratio of approximately 0.4 means investors pay roughly 40 cents per unit of expected earnings growth. AMD's PEG of approximately 0.8 means double the price per growth unit. Nvidia's 75% gross margins generate substantially more operating profit per revenue dollar than AMD's 52%, giving Nvidia greater margin of safety if the AI spending cycle decelerates.
The counterargument for AMD is about absolute upside potential. A stock at a $397 billion market cap needs to add $397 billion to double. A stock at $4.6 trillion needs to add $4.6 trillion to double. AMD capturing 20-25% of the AI accelerator market by 2028 (up from approximately 13% today) could justify a $600-800 billion market cap. Nvidia holding its current 75% market share at even higher revenue levels may already be priced in.
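The dollar asymmetry behind that argument is simple arithmetic. Figures are the market caps quoted in the article, in $B; the output is market-cap upside, so per-share upside would be lower after warrant dilution.

```python
amd_cap, nvda_cap = 397.0, 4620.0

# New value each company must create to double from here:
amd_to_double = amd_cap    # +$397B
nvda_to_double = nvda_cap  # +$4,620B

# Upside implied if AMD reaches the article's $600-800B market-cap range:
amd_upside_low = (600 / amd_cap - 1) * 100   # ~51%
amd_upside_high = (800 / amd_cap - 1) * 100  # ~102%
```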
Growth Trajectories: What 2027-2028 Could Look Like
AMD's data center business has a plausible path to $30-35 billion annually by 2027 based on the Meta pipeline, the OpenAI commitment, and a rumored third hyperscaler deal (suspected to be Microsoft). At those revenue levels and 55% gross margins, AMD's EPS could reach $8-10, supporting a $280-350 stock price at 35x forward earnings.
Nvidia's trajectory depends on whether the Blackwell cycle sustains the $200B+ data center revenue run rate. If AI infrastructure spending continues accelerating through sovereign AI initiatives, Nvidia could hit $250-280B in data center revenue by FY2028. At 75% margins and 25-30x forward P/E, that supports $250-300 per share.
Both scenarios imply meaningful upside from current prices: roughly 14-43% for AMD (from $245.04 to $280-350) and roughly 33-59% for Nvidia (from $188.63 to $250-300). On these point targets the implied ranges are comparable, but the paths differ in risk. AMD still needs to prove its MI450 can compete with Blackwell at scale. Nvidia just needs to keep executing a playbook it has perfected over the past three years.
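The implied upside ranges follow directly from the scenario price targets and the closing prices quoted at the top of the piece:

```python
def upside_pct(current: float, target: float) -> float:
    """Percentage gain from the current price to a target price."""
    return (target / current - 1) * 100

amd_now, nvda_now = 245.04, 188.63

# Scenario targets from the article's 2027-2028 sketches:
amd_low, amd_high = upside_pct(amd_now, 280), upside_pct(amd_now, 350)      # ~14%, ~43%
nvda_low, nvda_high = upside_pct(nvda_now, 250), upside_pct(nvda_now, 300)  # ~33%, ~59%
```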
Risk Factors That Could Derail Either Stock
Shared Risks
China export controls remain the biggest wild card for both companies. AMD lost approximately $700 million in Q2 FY2025 revenue when the MI308 was initially restricted for certain Chinese customers. Nvidia has navigated escalating export rules since 2022, with each policy revision costing billions in potential revenue. A full China decoupling scenario would hit both stocks, though Nvidia's larger China exposure means greater absolute dollar impact.
The Federal Reserve's FOMC decision on April 29 could shift the valuation framework for all growth stocks. If rates stay higher for longer, the discount rate applied to future earnings increases, compressing P/E multiples across the semiconductor sector regardless of underlying fundamentals.
AMD-Specific Risks
Custom silicon from Google (TPU v6), Amazon (Trainium3), and Microsoft (Maia 2) could squeeze AMD from below. These hyperscaler-designed chips don't need to beat AMD on every workload; they just need to handle internal inference at lower cost per token. If custom silicon captures 20-25% of AI compute by 2028, AMD's addressable market shrinks.
The 320 million share warrant overhang from the Meta and OpenAI deals creates dilution uncertainty that makes earnings-per-share modeling difficult. The warrants vest over time, and partial dilution of 50-80 million additional shares by 2027 is a realistic base case that existing EPS estimates may not fully reflect.
Nvidia-Specific Risks
At $4.6 trillion, Nvidia needs to execute flawlessly just to tread water. Any deceleration in data center spending, any Blackwell yield issue, any signal that hyperscaler capex is peaking could trigger a correction amplified by the stock's massive weighting in indices and ETFs. Approximately 40% of Nvidia's data center revenue comes from just four customers. Even a partial defection by one of them, as Meta's AMD commitment just demonstrated, creates outsized revenue risk.
The Verdict: Which Stock Deserves Your Money?
The honest answer depends on investment horizon and risk appetite.
Buy Nvidia if you want the dominant market leader with the widest moat in semiconductors, superior margins, a cheaper growth-adjusted valuation, and the de facto standard that every enterprise AI budget defaults to. The risk: paying $4.6 trillion for a company that must sustain $200B+ annual data center revenue just to justify current pricing.
Buy AMD if you believe the AI chip market is structurally big enough for a strong second player, if the Meta and OpenAI deals signal a permanent shift in hyperscaler procurement strategy, and if you want exposure to a company earlier on its growth curve with more room for multiple expansion. The risk: AMD must prove MI450 competes at scale, manage 320 million shares of warrant dilution, and fend off custom silicon competitors simultaneously.
For portfolio construction, owning both at a 2:1 Nvidia-to-AMD weighting provides diversified exposure to the AI chip cycle. Nvidia serves as the core infrastructure holding. AMD functions as the growth kicker with higher percentage upside potential and correspondingly higher execution risk. That split captures the AI investment theme without betting everything on one company's roadmap.
The Leadership Factor: Lisa Su vs Jensen Huang
Behind the financial metrics sits a less quantifiable variable: executive leadership. Jensen Huang has steered Nvidia through the most successful product cycle in semiconductor history, transforming a $150 billion gaming GPU company into a $4.6 trillion AI infrastructure platform. His decision to invest heavily in CUDA two decades before AI demanded it looks prescient in hindsight; at the time, Wall Street questioned why a GPU company needed a software platform.
Lisa Su's track record at AMD deserves equal respect. She inherited a company teetering on bankruptcy in 2014 with a $2 billion market cap. Through disciplined execution on Zen CPU architecture and a strategic pivot to data center AI, she built AMD into a $397 billion semiconductor force. The Meta deal represents the culmination of a decade-long repositioning from "budget Nvidia alternative" to "essential second source for hyperscale AI." Both CEOs are engineers by training, both make product-first decisions, and both have earned the trust of institutional investors through consistent execution against stated roadmaps.