As an Amazon Associate I earn money from qualifying purchases.

Friday, September 5, 2025

Nvidia GeForce RTX 40-Series Graphics Card Performance Hierarchy

Ada Lovelace delivered massive efficiency improvements, but needed the framegen crutch to really "boost" performance


Everyone was looking forward to the Nvidia GeForce RTX 40-series graphics cards. After two horribly long years of shortages caused by cryptomining, along with the lingering influence of the Covid pandemic, GPU prices and availability had finally started to settle down by the summer of 2022. Ethereum mining was dead, and rumors were that Nvidia was pulling out all the stops with its next-generation Ada Lovelace architecture. Smaller lithography courtesy of TSMC's cutting-edge 4N (4nm Nvidia) process node would leapfrog what AMD was planning. Everything seemed like it would be great.

When the RTX 4090 arrived on October 12, it made no excuses. It offered a substantial 50–75 percent performance improvement over the previous generation RTX 3090, only cost $100 more, and while the supply was definitely constrained, at least cryptominers weren't snapping up every GPU. The RTX 4080 followed a month later, on November 16, but by then things were turning sour. Nvidia had pre-announced the 4080 16GB and 4080 12GB variants, but the two cards had virtually nothing in common, and people were pissed. Also, prices still seemed to reflect the crypto boom, even though it had ended. The 4080's $1,199 took the overpriced 3080 Ti's place, while the 4080 12GB was going to cost $899 despite offering far less in the specs and performance departments.

Well, one thing led to another, and Nvidia axed the 4080 12GB name and changed it to the 4070 Ti — with a $100 price cut. But it was still clear that the rest of the RTX 40-series cards wouldn't quite manage to live up to the lofty results posted by the 4090. The 4080 was certainly faster than the 3080, by 50–70 percent, depending on the game and settings used. But it was also 70% more expensive than its predecessor. Alternatively, it was about 35% faster than the RTX 3080 Ti, which was still a pretty decent result.

But the real issue was the 4090. It cost 33% more than the 4080 but offered nearly linear performance gains for the money, particularly at 4K. If you had the money for a 4080, why not just spend $400 more and get the 4090? Of course, the answer was simple: There weren't enough 4090 cards to go around, and they were continually on back order. Also, the 16-pin 12VHPWR connectors were melting, by the thousands, and Nvidia was blaming "user error." No, it wasn't user error, but Nvidia didn't want to recall tens of thousands of graphics cards to fix the problem.

And there was also another new problem rearing its head for top-tier gaming GPUs: Artificial Intelligence. New apps like ChatGPT were proving to be incredibly potent and popular, and it seemed every company wanted in on the AI game. Also, the 4090 couldn't be sold (directly) to China, which led to black market operations buying the GPUs and selling them via back channels for up to three times their retail non-China cost. Instead of costing $1,600, the RTX 4090 was often selling for $2,500–$3,000. That had a knock-on effect for the RTX 4080 as well, which often sold for $1,400 or more instead of $1,200.

The good news was that, as the rest of the Ada Lovelace lineup came out, pricing and retail availability ended up being far better than the top two GPUs. Many felt the 4070 Ti was too expensive compared to the previous generation 3070 Ti — it cost $200 more — but it did offer a solid 50% or higher boost to performance. And that was without the newly minted frame generation, aka DLSS 3.

That warrants a lengthier discussion, but Nvidia put a lot more effort into boosting AI compute on the RTX 40-series, and then it added hardware to (potentially) double framerates via framegen. Uptake was relatively quick for a new gaming and graphics technology, but the resulting performance "gains" were more smoke and mirrors. Many games would only see a 30–50 percent increase in "framerates," and half of the framegen frames came with no new user input sampling. That meant you would often see theoretically smoother framerates, but the game could often feel worse. And of course, below about 80 FPS, the wheels fall off and framegen starts to feel highly questionable. That didn't stop Nvidia's marketing efforts, sadly, and it doubled down with MFG, multi-frame generation, for the RTX 50-series. <Sigh>

Ultimately, the RTX 40-series was a great step forward in performance and power efficiency, but also a step sideways or backward on VRAM and pricing. While the 4090 was plagued by Meltgate, insufficient supplies, and higher prices, most of the RTX 40-series cards had plenty to offer at moderately reasonable prices. Well, except for the 4060 Ti and 4060, but I'll get to those in a moment. First, let's show the performance and efficiency data.

Here's the sortable table of performance, power, and efficiency alongside the specs. Prices reflect the approximate cost of buying one of these GPUs off eBay right now, since they're all officially discontinued. The prices and resulting "value" (in performance per dollar) are still plenty attractive, especially since the newer 50-series parts don't actually improve features or performance as much as the 40-series did relative to their predecessors.

Nvidia GeForce RTX 40-Series Ada Lovelace GPU Performance

| Graphics Card | Price (MSRP) | Overall Performance | Value (FPS/$) | 4K Ultra | 1440p Ultra | 1080p Ultra | 1080p Medium | Power (Watts) | Efficiency (FPS/W) | Specifications |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| GeForce RTX 4090 | $2,103 ($1,600) | 123.0 | 0.058 | 70.8 | 115.3 | 145.6 | 192.6 | 327.6 | 0.375 | AD102, 16384 shaders, 2520MHz, 24GB GDDR6X@21Gbps, 1008GB/s, 450W |
| GeForce RTX 4080 Super | $918 ($1,000) | 101.4 | 0.110 | 53.3 | 92.7 | 123.9 | 172.8 | 257.7 | 0.393 | AD103, 10240 shaders, 2550MHz, 16GB GDDR6X@23Gbps, 736GB/s, 320W |
| GeForce RTX 4080 | $829 ($1,200) | 99.1 | 0.120 | 51.7 | 90.5 | 121.0 | 170.2 | 253.2 | 0.391 | AD103, 9728 shaders, 2505MHz, 16GB GDDR6X@22.4Gbps, 717GB/s, 320W |
| GeForce RTX 4070 Ti Super | $736 ($800) | 87.8 | 0.119 | 45.1 | 79.4 | 108.0 | 153.7 | 252.4 | 0.348 | AD103, 8448 shaders, 2610MHz, 16GB GDDR6X@21Gbps, 672GB/s, 285W |
| GeForce RTX 4070 Ti | $558 ($800) | 82.0 | 0.147 | 40.8 | 74.0 | 102.2 | 146.7 | 231.3 | 0.355 | AD104, 7680 shaders, 2610MHz, 12GB GDDR6X@21Gbps, 504GB/s, 285W |
| GeForce RTX 4070 Super | $546 ($600) | 76.4 | 0.140 | 37.5 | 68.8 | 96.0 | 137.9 | 199.3 | 0.384 | AD104, 7168 shaders, 2475MHz, 12GB GDDR6X@21Gbps, 504GB/s, 220W |
| GeForce RTX 4070 | $458 ($550) | 65.9 | 0.144 | 32.1 | 58.7 | 82.6 | 121.0 | 183.3 | 0.359 | AD104, 5888 shaders, 2475MHz, 12GB GDDR6X@21Gbps, 504GB/s, 200W |
| GeForce RTX 4060 Ti 16GB | $415 ($450) | 50.8 | 0.122 | 23.9 | 44.9 | 64.7 | 96.0 | 147.6 | 0.344 | AD106, 4352 shaders, 2535MHz, 16GB GDDR6@18Gbps, 288GB/s, 160W |
| GeForce RTX 4060 Ti 8GB | $274 ($400) | 41.8 | 0.153 | 14.5 | 37.8 | 58.3 | 96.3 | 132.4 | 0.316 | AD106, 4352 shaders, 2535MHz, 8GB GDDR6@18Gbps, 288GB/s, 160W |
| GeForce RTX 4060 | $246 ($300) | 33.5 | 0.136 | 11.4 | 30.1 | 46.8 | 78.2 | 116.9 | 0.287 | AD107, 3072 shaders, 2460MHz, 8GB GDDR6@17Gbps, 272GB/s, 115W |
GPU Testbed
AMD Ryzen 7 9800X3D CPU
Asus ROG Crosshair 870E Hero
G.Skill 2x16GB DDR5
Crucial T705 4TB SSD
Corsair HX1500i PSU
Cooler Master 280mm AIO
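For anyone curious how the derived columns in the table are produced, they're simple ratios: "Value" is the overall FPS divided by the current eBay price, and "Efficiency" is the overall FPS divided by the measured power draw. Here's a minimal sketch using a few rows from the table (the numbers come straight from the table above; the dictionary layout is just for illustration):

```python
# Derived columns from the table: value = FPS per dollar, efficiency = FPS per watt.
# Figures are the current price, overall FPS, and measured watts from the table above.
cards = {
    "RTX 4090": (2103, 123.0, 327.6),
    "RTX 4080 Super": (918, 101.4, 257.7),
    "RTX 4060": (246, 33.5, 116.9),
}

for name, (price, fps, watts) in cards.items():
    value = fps / price        # FPS per dollar spent
    efficiency = fps / watts   # FPS per watt consumed
    print(f"{name}: {value:.3f} FPS/$, {efficiency:.3f} FPS/W")
```

Running this reproduces the table's Value and Efficiency figures (0.058 FPS/$ and 0.375 FPS/W for the 4090, for example), give or take rounding.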

I ran every test on every GPU, which does make for some stupid results at times. You generally don't buy a card like the RTX 4090 for 1080p gaming, and similarly you don't pick up an RTX 4060 to try 4K ultra gaming. But the resulting overall performance figures are at least accurate. Nvidia will of course take exception to the lack of upscaling and framegen results, but as I've said in other articles, I view those as performance enhancers and not baseline measurements. Yes, DLSS upscaling can look good enough that I'll often enable it. Framegen is a different matter, and I usually leave it off.

Nvidia never made a new halo card for the RTX 40-series, leaving the RTX 4090 untouched during the two years of its reign. There were all sorts of rumors about a potential new Titan, or a quad-slot RTX 4090 Ti, but given the melting adapter problems and the lack of competition from AMD — plus the already high demand for Nvidia's GPUs, gaming and data center both — it just never materialized. Taking everything in aggregate, meaning the 1080p numbers pull the 4090 down a bit, it's still 21% faster than the next closest GPU. And if you only look at the 4K ultra results, it's a very wide 33% gap over the 4080 Super.

That's a big step down, potentially for less than half the price — even now the RTX 4090 continues to sell for over $2,000, used! The RTX 4080 Super and RTX 4080 are basically the same card, with just a few minor tweaks on the 4080 Super to boost performance a few percent, along with a $200 price cut. Except, now that both cards are officially discontinued, the 4080 Super tends to cost about $90 more than the vanilla 4080, and you're really not going to notice a 2% performance improvement. But then, you also get a card that was released in 2024 rather than 2022, I guess.

The RTX 4070 Ti Super, along with winning the award for the longest GPU name, offers the same 16GB of VRAM as the 4080, for another $100 price cut (going by current eBay prices). It does get a similar reduction in GPU shader counts, however, so in performance per dollar it ends up a wash. And when that's the case, I'll generally recommend buying the faster card, since the price doesn't include the cost of the rest of the PC.

Starting with the RTX 4070 Ti, we have three cards all saddled with 12GB of VRAM on a 192-bit interface. In general, that's still "enough" for most tasks, but we're definitely starting to see limitations with 12GB cards. Look at the 4070 Ti Super vs the 4070 Ti for example. Overall there's a 7.1% performance advantage for the Ti Super, even though it has 33% more VRAM and memory bandwidth. But at 1080p medium it's a 4.8% lead, 5.6% at 1080p ultra, 7.3% at 1440p ultra, and 10.5% at 4K ultra. There are also several games in my test suite that show more substantial differences, especially in minimum FPS, where the extra VRAM results in minimums that are over 20% higher.

The 4070 Ti was never a great card, of course, and it was made even less desirable when the 4070 Super arrived. The Ti beats the Super by about 7%, while its MSRP was 33% higher. The 4070 Super in turn looks quite good compared to the vanilla 4070, offering about 16% higher performance for only 9% more money. Real-world prices of course corrected for that, which is why these days the 4070 Super costs 19% more on eBay, while the 4070 Ti only costs 2% more than the Super.

Then we get to the problem children, the RTX 4060 Ti and RTX 4060. The 4060 Ti did become available with both 8GB and 16GB variants, but the vast majority of cards sold were the 8GB model. That was a slap in the face to gamers everywhere. The RTX 3060 was perhaps the best balanced mainstream card Nvidia ever made, and rather than learning from how well that GPU did, Nvidia instead castrated the 4060 Ti with only a 128-bit interface and either 8GB or 16GB VRAM.

What does 16GB do for gaming? At 1080p medium, the difference between the two cards is pretty negligible. But at higher settings? Well, Indiana Jones and the Great Circle won't even let you attempt to run ultra settings without at least 10GB on Nvidia GPUs. (Oddly, it works okay on AMD and Intel 8GB cards!) So that's an immediate penalty for the 8GB model, resulting in an 11% win for the 16GB card even at 1080p ultra. Then, at 1440p ultra, the 8GB card often struggles with its lack of VRAM, increasing the overall margin of victory to 19%. And at 4K ultra? Well, the 8GB card has a few instances (Stalker 2, especially) where performance just collapses. The 16GB card mostly continues to chug along at playable framerates, particularly with DLSS enabled.

I'm not as down on the vanilla RTX 4060, simply because it was less expensive than the outgoing 3060 12GB. It still sucks that it has 8GB, the same amount of VRAM as the GTX 1070 shipped with back in 2016, but I don't expect anyone to seriously plan on gaming at maxed out settings and 1440p or 4K with such a GPU. The worst thing about the 4060 is that it apparently sold well enough that Nvidia didn't learn the lesson, and the replacement RTX 5060 arrived two years later with the exact same 8GB of VRAM. What was barely excusable in 2023 has become laughable in 2025, but that's a story for the Blackwell 50-series GPU hierarchy.

Here are the charts for the overall performance tests of the RTX 40-series. Once I've wrapped up all the individual families, I'll see about putting together the official monolithic GPU benchmarks hierarchy to showcase how the past four generations of Nvidia, AMD, and even Intel GPUs stack up. Until then, here are the Ada results.

Nvidia Ada Lovelace RTX 40-Series 1080p medium GPU benchmarks

Nvidia Ada Lovelace RTX 40-Series 1080p ultra GPU benchmarks

Nvidia Ada Lovelace RTX 40-Series 1440p ultra GPU benchmarks

Nvidia Ada Lovelace RTX 40-Series 4K ultra GPU benchmarks
