
Monday, September 8, 2025

RTX 5090 vs 4090 vs 3090/Ti vs Titan RTX: Comparing Four Generations of Nvidia's Fastest GPUs

How do Nvidia's fastest GPUs of the past four generations stack up, seven years after the Titan RTX launch?

Seven years ago, Nvidia released the Titan RTX — the last of the Titans, with the xx90-series GPUs inheriting the crown. People like to complain about how expensive the halo GPUs are, but it's nothing new. The Titan RTX launched at $2,499, a price no GeForce card has ever (officially) matched. Of course, it offered some other extras, like improved professional graphics support. The Titan RTX might seem like a terrible deal compared to the RTX 2080 Ti, at twice the price for a minor performance bump (albeit with more than double the VRAM), but compared to the Quadro RTX 8000 and Quadro RTX 6000, it was a third of the cost with most of the other features intact.

I digress. The halo GPUs from Nvidia have long since ceased to compete as a value proposition, though arguably the 5090 and 4090 are better "values" than the step-down 5080 and 4080 for the most recent generations. We've put together full GPU performance hierarchies for the Blackwell RTX 50-series, Ada Lovelace RTX 40-series, and Ampere RTX 30-series GPUs, and we're working on a Turing RTX 20-series hierarchy. But in the meantime, with testing of the Titan RTX complete, we wanted to look at these top-tier GPUs to see the progression over the past seven years.

Sunday, September 7, 2025

Nvidia GeForce RTX 50-Series Graphics Card Performance Hierarchy

The Nvidia Blackwell architecture mostly rehashes Ada, using the same process node. Only the RTX 5090 stands out as a major (and majorly expensive) upgrade.


Welcome to the modern era of Nvidia graphics cards, courtesy of the Blackwell architecture. Except, if we're being honest here — unlike Nvidia — not a whole helluva lot has changed architecturally with Blackwell relative to Ada Lovelace. We can sum up the major upgrades quite quickly.

First, Blackwell has native FP4 support on the tensor cores, which so far has only been used in a handful of applications, like an FP4 version of Flux AI image generation built into a special version of UL's Procyon benchmark. Blackwell also offers native FP6 support, which is sort of a hybrid between FP4 and FP8 that can potentially reduce memory requirements, but our understanding is that it's not really any faster than just using native FP8 operations.
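
To put those formats in perspective, here's a quick back-of-the-envelope sketch (my math, not Nvidia's numbers) of how weight precision translates into VRAM footprint. The 12-billion-parameter count is an assumption, roughly in the ballpark of an image model like Flux, and the savings only apply to whatever actually gets stored in the lower-precision format.

    # Rough VRAM footprint for the weights of a hypothetical 12B-parameter model
    # at different precisions. Lower-precision formats mostly save memory and
    # bandwidth; FP6 in particular isn't expected to run any faster than FP8.
    PARAMS = 12e9  # assumed parameter count (illustrative only)

    BITS_PER_WEIGHT = {"FP16": 16, "FP8": 8, "FP6": 6, "FP4": 4}

    for fmt, bits in BITS_PER_WEIGHT.items():
        gib = PARAMS * bits / 8 / 2**30  # bits -> bytes -> GiB
        print(f"{fmt}: ~{gib:.1f} GiB of weights")

Roughly speaking, that's ~22 GiB of weights at FP16 shrinking to ~11 GiB at FP8 and ~5.6 GiB at FP4, which is why the lower-precision formats matter more for fitting models into VRAM than for raw speed.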

Blackwell does offer some new features for ray tracing applications, but as with all other ray tracing tools, developer uptake can often be quite slow, particularly with regard to new games using the features. An enhanced triangle cluster intersection engine allows Mega Geometry (a new buzzword from Nvidia!) to better render massively complex scenes without bogging down. It feels a bit to me like the old over-tessellation approach, where Nvidia got some games (like Crysis 2 or 3, IIRC) to utilize massive amounts of tessellation... on flat surfaces! Why? Because AMD's GPUs struggled mightily with that workload, so it made Nvidia's GPUs look better.

Friday, September 5, 2025

Nvidia GeForce RTX 40-Series Graphics Card Performance Hierarchy

Ada Lovelace delivered massive efficiency improvements, but needed the framegen crutch to really "boost" performance


Everyone was looking forward to the Nvidia GeForce RTX 40-series graphics cards. After two horribly long years of shortages caused by cryptomining, along with the influence of the Covid pandemic, GPU prices and availability had finally started to settle down by the summer of 2022. Ethereum mining was dead, and rumors were that Nvidia was pulling out all the stops with its next-generation Ada Lovelace architecture. Smaller lithography courtesy of TSMC's cutting-edge 4N (4nm Nvidia) process node would leapfrog what AMD was planning. Everything seemed like it would be great.

When the RTX 4090 arrived on October 12, it made no excuses. It offered a substantial 50–75 percent performance improvement over the previous-generation RTX 3090, only cost $100 more, and while supply was definitely constrained, at least cryptominers weren't snapping up every GPU. The RTX 4080 followed a month later, on November 16, but by then things were turning sour. Nvidia had pre-announced the 4080 16GB and 4080 12GB variants, but the two cards had virtually nothing in common, and people were pissed. Also, prices still seemed to be influenced by the crypto boom, even though that had ended. $1,199 for the 4080 took the overpriced 3080 Ti's place, while the 4080 12GB was going to cost $899 despite offering far less in the specs and performance departments.

Well, one thing led to another and Nvidia axed the 4080 12GB name and changed it to the 4070 Ti — with a $100 price cut. But it was still clear that the rest of the RTX 40-series cards wouldn't quite manage to live up to the lofty results posted by the 4090. The 4080 was certainly faster than the 3080, by 50–70 percent, depending on the game and settings used. But it was also 70% more expensive than its predecessor. Alternatively, it was about 35% faster than the RTX 3080 Ti, which was still a pretty decent result.
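
To make the value argument concrete, here's a rough sketch of launch-price performance per dollar using the MSRPs and the deltas quoted above; the 60% uplift is just my midpoint of the 50–70 percent range, not a measured average.

    # Quick launch-MSRP value check. Performance is normalized to the RTX 3080 = 1.00,
    # using the rough deltas quoted above (assumed midpoints, not measured averages).
    cards = {
        # name: (launch MSRP in USD, relative performance vs. RTX 3080)
        "RTX 3080":    (699,  1.00),
        "RTX 3080 Ti": (1199, 1.60 / 1.35),  # the 4080 is ~35% faster than the 3080 Ti
        "RTX 4080":    (1199, 1.60),         # midpoint of the 50-70% uplift over the 3080
    }

    for name, (price, perf) in cards.items():
        print(f"{name}: {1000 * perf / price:.2f} relative performance per $1,000")

By that rough math, the 4080 roughly held the line on the 3080's performance per dollar while handily beating the 3080 Ti's, which is about where the discussion above lands.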