As an Amazon Associate I earn money from qualifying purchases.

Monday, September 8, 2025

RTX 5090 vs 4090 vs 3090/Ti vs Titan RTX: Comparing Four Generations of Nvidia's Fastest GPUs

How do Nvidia's fastest GPUs of the past four generations stack up, seven years after the Titan RTX launch?

Seven years ago, Nvidia released the Titan RTX — the last of the Titans, with the xx90 series GPUs inheriting the crown. People like to complain about how expensive the halo GPUs are, but it's nothing new. The Titan RTX launched at $2,499, which no GeForce card has ever (officially) matched. Of course, it offered some other extras, like improved professional graphics support. The Titan RTX might seem like a terrible deal compared to the RTX 2080 Ti, at twice the price for a minor performance bump and more than double the VRAM, but compared to the Quadro RTX 8000 and Quadro RTX 6000, it was one-third the cost with most of the other features intact.

I digress. The halo GPUs from Nvidia have long since ceased to compete as a value proposition, though arguably the 5090 and 4090 are better "values" than the step down 5080 and 4080 for the most recent generations. We've put together full GPU performance hierarchies for the Blackwell RTX 50-series, Ada Lovelace RTX 40-series GPUs, and Ampere RTX 30-series, and we're working on a Turing RTX 20-series hierarchy. But in the meantime, with testing of the Titan RTX complete, we wanted to look at these top-tier GPUs to see the progression over the past seven years.

Games have changed quite a bit over the past several years, with VRAM requirements trending upward and the use of more advanced rendering techniques becoming increasingly common. All of these GPUs have ample amounts of memory, however, so that's not really a problem. In fact, everything except the RTX 5090 has 24GB of VRAM, with the 5090 bumping that to 32GB, so we're really looking at how much compute and memory bandwidth have helped improve performance at the pole position for each graphics card architecture.

It's worth pointing out that memory bandwidth has increased by 167% going from the Titan RTX to the RTX 5090. GPU compute meanwhile has improved by 543% over that same period. That's an extreme amount of compute for FP32 graphics calculations, and that's actually not the biggest improvement. For AI workloads, even discounting FP4 support for now, the Titan RTX only offered 130 TFLOPS of FP16 compute. The RTX 5090 pushes 1676 TFLOPS of FP8 today (including sparsity), a nearly 13X boost in AI computational horsepower. No wonder Nvidia is dominating the AI space...
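The generational math above is simple percent-change arithmetic. Here's a quick sketch in Python, using the memory bandwidth figures from the spec table below; note that the peak FP32 TFLOPS inputs (16.3 for the Titan RTX, 104.8 for the 5090) are commonly cited figures I'm supplying, not numbers from this article:

```python
# Percent increases from the Titan RTX to the RTX 5090, using
# bandwidths from the spec table (GB/s) and assumed peak FP32
# compute figures (TFLOPS).
def pct_increase(new, old):
    """Percent improvement of `new` over `old`."""
    return (new / old - 1) * 100

bandwidth = pct_increase(1792, 672)    # Titan RTX: 672GB/s, 5090: 1792GB/s
fp32 = pct_increase(104.8, 16.3)       # assumed FP32 TFLOPS figures
ai_boost = 1676 / 130                  # 5090 FP8 (sparse) vs Titan FP16

print(f"Bandwidth: +{bandwidth:.0f}%")    # ~167%
print(f"FP32 compute: +{fp32:.0f}%")      # ~543%
print(f"AI throughput: {ai_boost:.1f}x")  # ~12.9x, i.e. "nearly 13X"
```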

We're running our standard suite of 15 games, at 1080p medium/ultra, 1440p ultra, and 4K ultra. We're not using upscaling or frame generation, and only three of the games (20%) have ray tracing enabled, which feels like a pretty reasonable assessment of the technology. Obviously, the older the GPU, the more it's going to struggle at 4K and 1440p ultra. Conversely, the newer GPUs are going to hit CPU bottlenecks at 1080p.

You can sort the table by any of the resolutions (or other columns) to help highlight the areas you're most interested in comparing. Prices are the average of sold eBay listings over the past month for everything except the RTX 5090, where we use the lowest price from a reputable retail outlet (e.g. Amazon, Newegg, B&H Photo, and similar). Not surprisingly, the eBay value of the Titan RTX has dropped a lot from the original launch price.

Nvidia Titan RTX, RTX 3090, RTX 3090 Ti, RTX 4090, and RTX 5090 Compared
Graphics Card | Price (MSRP) | Overall Perf | Value (FPS/$) | 4K Ultra | 1440p Ultra | 1080p Ultra | 1080p Medium | Power (W) | Efficiency (FPS/W) | Specifications
GeForce RTX 5090 | $2,280 ($2,000) | 146.3 | 0.064 | 95.3 | 141.2 | 163.9 | 207.9 | 416.3 | 0.352 | GB202, 21760 shaders, 2407MHz, 32GB GDDR7@28Gbps, 1792GB/s, 575W
GeForce RTX 4090 | $2,103 ($1,600) | 123.0 | 0.058 | 70.8 | 115.3 | 145.6 | 192.6 | 327.6 | 0.375 | AD102, 16384 shaders, 2520MHz, 24GB GDDR6X@21Gbps, 1008GB/s, 450W
GeForce RTX 3090 Ti | $851 ($2,000) | 84.4 | 0.099 | 45.4 | 76.7 | 101.7 | 143.2 | 410.9 | 0.205 | GA102, 10752 shaders, 1860MHz, 24GB GDDR6X@21Gbps, 1008GB/s, 450W
GeForce RTX 3090 | $793 ($1,500) | 76.8 | 0.097 | 40.5 | 69.5 | 93.3 | 132.2 | 347.7 | 0.221 | GA102, 10496 shaders, 1695MHz, 24GB GDDR6X@19.5Gbps, 936GB/s, 350W
Titan RTX | $608 ($2,500) | 51.8 | 0.085 | 26.4 | 46.4 | 63.6 | 92.4 | 281.5 | 0.184 | TU102, 4608 shaders, 1770MHz, 24GB GDDR6@14Gbps, 672GB/s, 280W
GPU Testbed
AMD Ryzen 7 9800X3D CPU
Asus ROG Crosshair 870E Hero
G.Skill 2x16GB DDR5
Crucial T705 4TB SSD
Corsair HX1500i PSU
Cooler Master 280mm AIO
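The Value and Efficiency columns in the table are just the overall FPS divided by the current price and by average power draw, respectively. A minimal sketch of that calculation, using the table's own numbers:

```python
# Derive the Value (FPS/$) and Efficiency (FPS/W) columns from the
# table data: (overall FPS, current price in $, average power in W).
cards = {
    "GeForce RTX 5090": (146.3, 2280, 416.3),
    "GeForce RTX 4090": (123.0, 2103, 327.6),
    "GeForce RTX 3090 Ti": (84.4, 851, 410.9),
    "GeForce RTX 3090": (76.8, 793, 347.7),
    "Titan RTX": (51.8, 608, 281.5),
}

for name, (fps, price, watts) in cards.items():
    value = fps / price       # FPS per dollar at current pricing
    efficiency = fps / watts  # FPS per watt of average draw
    print(f"{name}: {value:.3f} FPS/$, {efficiency:.3f} FPS/W")
```

Dividing by current street price rather than MSRP is what lets the old Ampere cards top the value metric despite their much lower raw performance.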

If you were hoping overall performance would scale directly with the massive boost in theoretical compute, obviously that didn't happen. We never expected it would, though the gains at 4K come closest to the theoretical scaling. In practice, both memory bandwidth and compute impact performance, as well as other architectural changes under the hood. Let's take each generation in turn.

The RTX 3090 improved upon the Titan RTX by 48% overall, with a 53% increase in 4K ultra performance. These are the two "slowest" GPUs in our lineup, so CPU limitations are the least impactful. Even the 1080p medium performance improved by 43% on average. That's the practical aspect, but what about theoretical changes? The 3090 offers 39% more memory bandwidth and 118% more FP32 compute. But that compute figure is because Ampere doubled the number of FP32 cores, and real-world performance never saw anywhere close to that level of scaling.

The 3090 Ti was a late refresh of the Ampere halo offering, at an even higher price — right before the collapse of GPU cryptocurrency mining. It's 10% faster than the vanilla 3090 overall, with a 12% increase at 4K and only an 8% improvement at 1080p medium. More critically, power use is 18% higher, so performance scaled worse than power. Basically, Nvidia cranked the dial to 11 and decided to see what would happen. It was okay for performance, bad for efficiency, and terrible for pricing.

We'll stick with the vanilla 3090 for the comparison with the 4090, since very few people purchased the 3090 Ti. Gen on gen, the 4090 offered up a theoretical 132% increase in compute with only 8% more raw memory bandwidth, but the 40-series also got a large L2 cache that improved effective memory bandwidth. In practice, the 4090 delivered 60% higher overall performance, and 75% higher 4K ultra performance. Not bad, considering average power use also went down. The 4090 could use a lot of power, but in general it was still quite efficient.

Finally, we have the latest RTX 5090. Maximum power use has increased to 575W, with some factory overclocked cards pushing that even higher. Melting cables are a problem, again, though average power use across our full test suite is basically tied with the 3090 Ti. Compared to the 4090, the 5090 offers 78% more memory bandwidth and 27% more compute. Overall performance improved by just 19%, but that's partly skewed by the 1080p numbers that really start to hit CPU limits. At 4K ultra, the 5090 still only delivers 35% more performance than the 4090, but that at least exceeds the raw compute increase, so the massive bandwidth boost does prove useful.
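The gen-on-gen percentages quoted above fall out of the same table data. A quick check for the 5090 vs 4090 comparison; note that the peak FP32 figures (104.8 and 82.6 TFLOPS) are commonly cited numbers I'm assuming, not values from this article:

```python
# Gen-on-gen uplift, RTX 5090 vs RTX 4090, from the spec table plus
# assumed peak FP32 compute figures (104.8 and 82.6 TFLOPS).
def uplift(new, old):
    """Rounded percent improvement of `new` over `old`."""
    return round((new / old - 1) * 100)

print(uplift(1792, 1008))    # memory bandwidth: 78%
print(uplift(104.8, 82.6))   # FP32 compute: 27%
print(uplift(146.3, 123.0))  # overall performance: 19%
print(uplift(95.3, 70.8))    # 4K ultra: 35%
```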

In terms of overall efficiency, the RTX 4090 actually takes top honors, with the 5090 being a slight step down. That's expected, as the two GPUs use the same TSMC 4N process node, and Nvidia chased performance at the cost of power with its most recent Blackwell architecture. The Ampere and Turing generation GPUs sit well below that, with the 3090 bringing modest efficiency gains over the Titan RTX and the 3090 Ti walking those back a step.

What about the relative value proposition, looking at current prices? Interestingly, it's the 3090 and 3090 Ti that rise above the other GPUs, followed by the Titan RTX. The old Titan still handles most games fine, but not at 4K — and upscaling is definitely useful for even 1440p ultra. The 4090 meanwhile ends up being overpriced still, with the 5090 currently offering a better value.

I'll wrap up with four galleries of performance. I'm not including all the individual game charts for this roundup, as that data is shown elsewhere. Please share my content if you find this analysis useful, as I'm trying to restart the process and it's slow going. I'm a fully independent GPU reviewer with two decades of experience, and I'm hoping to bypass the corporate powers and write the content I'm interested in providing.

Thanks for reading!

Titan RTX vs 5090, 4090, 3090 Ti, 3090 at 1080p Medium

Titan RTX vs 5090, 4090, 3090 Ti, 3090 at 1080p Ultra

Titan RTX vs 5090, 4090, 3090 Ti, 3090 at 1440p Ultra

Titan RTX vs 5090, 4090, 3090 Ti, 3090 at 4K Ultra
