The NVIDIA Tax
Every major AI company writes a check to Jensen Huang. This essay traces exactly how much of the AI economy flows through one company's cash register.
One Company Collects Rent on the Entire AI Economy
Here is a number that should bother you. In fiscal year 2025, NVIDIA's data center revenue hit $115 billion. That money came from somewhere. It came from Microsoft, Google, Amazon, Meta, Oracle, and every startup trying to train a model.
These companies don't buy NVIDIA GPUs because they want to. They buy them because there is no viable alternative at the performance tier required for frontier AI training. The H100 and its successors are the only chips that can handle the matrix math at the scale and speed the big labs need. That gives NVIDIA something unusual in tech: a toll road with no detour.
Every dollar of AI revenue at Microsoft includes a fraction routed to NVIDIA. Every image Midjourney generates, every ChatGPT response, every Claude conversation runs on silicon that NVIDIA designed. The company doesn't build AI products. It sells the raw materials. And the markup is staggering: NVIDIA's data center gross margin now sits at roughly 78%.
That 78% margin is the tax. For every $100 billion NVIDIA collects from AI companies, $78 billion is gross profit. The buyers are funding NVIDIA's R&D, its next-generation chips, and its competitive moat. They are paying to make their own dependency deeper.
How Much Each AI Giant Pays
The exact GPU spend isn't in any earnings report. Companies bury it inside capital expenditure line items labeled "property, plant, and equipment." But the math isn't hard. We know total capex (capital expenditure: money spent to buy or upgrade physical assets like servers, buildings, and equipment) for each company, we know what fraction goes to data centers, and we know NVIDIA's market share in AI accelerators is above 90%.
Here's what the numbers suggest for calendar year 2025.
Microsoft alone spent roughly $28 billion on NVIDIA hardware last year. That is more than the entire annual revenue of AMD. Meta, Google, and Amazon each wrote checks north of $20 billion. These four companies account for about 82% of NVIDIA's data center revenue.
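The estimation method above can be sketched in a few lines of Python. The capex and allocation inputs here are hypothetical round numbers chosen for illustration, not figures from any filing; only the formula (capex × data-center fraction × NVIDIA's accelerator share) reflects the approach described in the text.

```python
# Back-of-envelope estimate of a hyperscaler's annual NVIDIA GPU spend.
# All inputs below are illustrative, not from any earnings report.

def estimated_gpu_spend(total_capex, dc_fraction, nvidia_share):
    """GPU spend ~= total capex, times the share allocated to data
    centers, times NVIDIA's share of the AI accelerator budget."""
    return total_capex * dc_fraction * nvidia_share

# Hypothetical inputs: $60B capex, 50% to data centers, ~93% NVIDIA
# share of accelerators. Output lands near the $28B figure cited above.
spend = estimated_gpu_spend(60e9, 0.50, 0.93)
print(round(spend / 1e9, 1))  # 27.9 (in $B)
```

Changing any one input by a few points moves the estimate by billions, which is why these figures are ranges, not precise numbers.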
The concentration runs both ways. NVIDIA depends on these four buyers. But these four buyers depend on NVIDIA's chips even more. If Microsoft stopped buying, NVIDIA's stock would drop. If NVIDIA stopped shipping, Microsoft's AI strategy would halt.
What Percentage of AI Revenue Flows to NVIDIA?
This is the question that matters for investors. If you own Microsoft stock, how much of its AI upside is really NVIDIA's upside in disguise?
Microsoft's Intelligent Cloud segment (which includes Azure AI) generated about $105 billion in revenue in fiscal year 2025. Its GPU spend was roughly $28 billion. That means 27 cents of every dollar Microsoft earned from cloud and AI went straight to NVIDIA.
Meta's effective rate is even higher. The company spent $24 billion on NVIDIA GPUs against roughly $70 billion in total ad revenue. Llama model training and the recommendation engine that drives those ad dollars both run on NVIDIA silicon.
Google's rate is lower, around 18%, because Google designs its own TPUs (Tensor Processing Units: custom AI chips designed by Google for its own data centers) for a large portion of its AI workloads. Google is the one hyperscaler that has partially escaped the tax. That escape cost billions in chip design R&D over a decade. Nobody else has pulled it off.
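The effective "tax rates" in this section all come from one division: GPU spend over the revenue it supports. A minimal sketch, using the essay's own rounded estimates:

```python
# Effective NVIDIA tax rate: estimated GPU spend divided by the
# revenue that spend supports. Inputs are the essay's estimates.

def tax_rate(gpu_spend, segment_revenue):
    return gpu_spend / segment_revenue

rates = {
    "Microsoft (Intelligent Cloud)": tax_rate(28e9, 105e9),  # ~27%
    "Meta (total ad revenue)": tax_rate(24e9, 70e9),         # ~34%
}
for name, rate in rates.items():
    print(f"{name}: {rate:.0%}")
```

Note that the denominators differ (a cloud segment for Microsoft, total ad revenue for Meta), so the rates are indicative of dependency, not directly comparable line items.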
Why the Tax Keeps Rising
NVIDIA's pricing power comes from two reinforcing loops.
Loop 1: CUDA Lock-in
Every AI lab has millions of lines of code written for CUDA (NVIDIA's proprietary programming framework for GPU computing, used by virtually all AI researchers). Switching to a competitor's chip means rewriting that code. The switching cost grows every year as codebases get larger. CUDA is not a product. It is a moat disguised as a developer tool.
Loop 2: Generation Leapfrogging
Every time a competitor gets close, NVIDIA ships a new generation. The H100 made the A100 obsolete. The B200 will make the H100 look slow. Each leap resets the competitive gap. AMD's MI300X is competitive with the H100, but NVIDIA is already selling the next thing.
The result: NVIDIA's data center gross margin has expanded from 65% in 2022 to 78% in 2025. The tax rate is going up, not down. Each GPU generation costs more, performs better, and locks the buyer in deeper.
For the AI companies paying this tax, the math still works today. GPU spending is growing at 60% per year, but AI revenue is growing at 80%. The gap is positive. The worry is what happens when AI revenue growth slows to 30% and the GPU bill keeps climbing.
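The growth-gap worry compounds quickly, which a short projection makes concrete. The starting values below are arbitrary index numbers; only the trend of the ratio matters. Under the pessimistic scenario in the text (spend growing 60% per year against 30% revenue growth), the GPU bill overtakes AI revenue within a couple of years:

```python
# Projecting the GPU bill as a share of AI revenue when spend grows
# 60%/yr but revenue growth has slowed to 30%/yr. Starting values
# are arbitrary; the point is how fast the ratio compounds.

spend, revenue = 100.0, 150.0  # hypothetical index values
for year in range(1, 6):
    spend *= 1.60
    revenue *= 1.30
    print(f"Year {year}: GPU bill = {spend / revenue:.0%} of AI revenue")
```

The ratio grows by a factor of 1.6/1.3 ≈ 1.23 every year, regardless of the starting point. That is the structural problem: a persistent growth gap in the wrong direction is unsustainable no matter how healthy the starting margin looks.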
Can Anyone Stop Paying?
Every tax invites avoidance. The big AI companies are all working on it.
The escape routes are real but incremental. Google is the furthest along and still buys billions in NVIDIA GPUs. For investors, the honest assessment is that the NVIDIA tax will persist for the rest of this decade. The rate may compress slightly as custom silicon matures. It will not disappear.
How to Think About This as an Investor
If you own a basket of AI stocks, you are paying the NVIDIA tax multiple times. Microsoft, Google, Amazon, and Meta each send a slice of their revenue to NVIDIA. If you own all four plus NVIDIA itself, your portfolio's AI upside is heavily concentrated in one company's pricing power.
Three frameworks for thinking about it:
Framework 1: Own the Tollbooth
The simplest approach. If every AI company pays NVIDIA, own NVIDIA. The bull case is that GPU demand will grow for years and margins will hold. The risk is that the stock already trades at 35x forward earnings, pricing in significant growth. At current valuations, you need NVIDIA to keep growing data center revenue at 40%+ annually just to justify the price. That has happened for three straight years. Whether it continues depends on whether AI capex keeps accelerating.
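One way to see what a 35x forward multiple demands is to hold the stock price flat and watch the multiple compress as earnings compound. This is a deliberately crude sketch, not a valuation model; the 40% growth rate is the figure from the paragraph above:

```python
# If earnings compound at 40%/yr and the price stays flat, the
# effective multiple compresses each year. Purely illustrative.

multiple = 35.0   # forward P/E today
growth = 0.40     # assumed annual earnings growth
for year in range(1, 4):
    multiple /= (1 + growth)
    print(f"Year {year}: effective multiple = {multiple:.1f}x")
```

Three years of 40% growth takes a 35x multiple down to roughly 13x, a level many investors would call reasonable. The entire bull case compresses into one question: does that growth actually materialize?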
Framework 2: Own the Tax Resisters
Google has reduced its NVIDIA dependency more than any other hyperscaler. If TPU performance continues to close the gap with NVIDIA GPUs, Google's AI margins improve while competitors' margins stay compressed. The play is that Google's custom silicon gives it a structural cost advantage in AI deployment. The risk is that TPU development is expensive and Google still buys NVIDIA GPUs for certain workloads.
Framework 3: Wait for the Tax Cut
If NVIDIA's margins compress from 78% to 60% over five years, the AI companies currently paying the tax get an immediate cost reduction. Their margins expand, their earnings grow, and their stock prices rerate. The play is to own the payers and wait for competition to lower the tax. The risk is that NVIDIA's moat holds and margins stay elevated. In that scenario, you've owned the payers and missed the tollbooth operator.
How I Built This
Every number in this essay is derived from public filings and industry estimates. The GPU spend figures are estimates based on reported capex, data center allocation ratios, and NVIDIA's disclosed customer concentration. Here are the key assumptions.