How do Nvidia's Ampere RTX 30-series GPUs stack up five years later?

Nvidia launched its GeForce RTX 30-series GPUs with the RTX 3080 10GB card on September 17, 2020 — very nearly five years ago as I write this. It followed that with the RTX 3090 one week later, and then the RTX 3070, RTX 3060 Ti, RTX 3060 12GB, and RTX 3050 8GB over the following months. A bit less than one year after the initial launch, Nvidia then released the 3080 Ti and 3070 Ti, with the final RTX 3090 Ti coming in the spring of 2022.
The GPUs were all good on paper, for the most part, but at the time of launch virtually every one of these graphics cards ended up being a massive disappointment for gamers. Pardon me if I'm dredging up old memories that might still cause PTSD, but late 2020 through early 2022 was a perfect storm of awfulness in the graphics card industry. Ethereum mining was massively profitable during portions of that time, to the point where miners were scooping up every viable GPU and were often willing to pay over triple the MSRPs. On top of that, we had the Covid pandemic causing more people to work from home — or stay home to play games — and plenty of folks were upgrading their PCs. Massive GPU shortages ensued.
But now all of that is past, and we have had two new generations of Nvidia GPUs in the interim: the RTX 40-series with the Ada Lovelace architecture arrived beginning in the fall of 2022, and the RTX 50-series with the Blackwell architecture launched at the start of 2025. You wouldn't necessarily go out and buy a new RTX 30-series GPU today, but how do these older generation GPUs compare to the modern stuff?
I'll be answering that question once I update the full GPU hierarchy, but as a teaser, here's the performance data from the Ampere generation. Do note that I have the RTX 3060 12GB and RTX 3050 8GB models. Nvidia later added an RTX 3060 8GB card that dropped performance about 10–15 percent relative to the 12GB model, though it didn't drop the price. Then after the RTX 40-series began shipping, Nvidia pushed out a last gasp RTX 3050 6GB card that also dropped performance 10–15 percent from the 8GB model. Thumbs down on both of those! I'm also missing the RTX 3080 12GB, but that's basically the same performance (within a couple of percent) as the RTX 3080 Ti.
Here's the sortable table of performance, power, and efficiency alongside the specs. Prices are the approximate cost of buying one of these GPUs on eBay right now. As noted above, these GPUs were heavily used for mining, so I would absolutely not recommend buying a used RTX 30-series card. But if you had to? The prices and resulting "value" (in performance per dollar) are pretty decent.
**Nvidia GeForce RTX 30-Series Ampere GPU Performance**

| Graphics Card | Price (MSRP) | Overall Performance | Value (FPS/$) | 4K Ultra | 1440p Ultra | 1080p Ultra | 1080p Medium | Power (Watts) | Efficiency (FPS/W) | Specifications |
|---|---|---|---|---|---|---|---|---|---|---|
| GeForce RTX 3090 Ti | $851 ($2,000) | 84.4 | 0.099 | 45.4 | 76.7 | 101.7 | 143.2 | 410.9 | 0.205 | GA102, 10752 shaders, 1860MHz, 24GB GDDR6X@21Gbps, 1008GB/s, 450W |
| GeForce RTX 3090 | $793 ($1,500) | 76.8 | 0.097 | 40.5 | 69.5 | 93.3 | 132.2 | 347.7 | 0.221 | GA102, 10496 shaders, 1695MHz, 24GB GDDR6X@19.5Gbps, 936GB/s, 350W |
| GeForce RTX 3080 Ti | $463 ($1,200) | 73.2 | 0.158 | 38.4 | 65.8 | 89.1 | 127.2 | 353.2 | 0.207 | GA102, 10240 shaders, 1665MHz, 12GB GDDR6X@19Gbps, 912GB/s, 350W |
| GeForce RTX 3080 10GB | $345 ($700) | 62.6 | 0.181 | 29.0 | 57.7 | 79.7 | 115.3 | 294.7 | 0.212 | GA102, 8704 shaders, 1710MHz, 10GB GDDR6X@19Gbps, 760GB/s, 320W |
| GeForce RTX 3070 Ti | $281 ($600) | 45.0 | 0.160 | 17.0 | 38.0 | 62.2 | 101.8 | 264.1 | 0.170 | GA104, 6144 shaders, 1770MHz, 8GB GDDR6X@19Gbps, 608GB/s, 290W |
| GeForce RTX 3070 | $234 ($500) | 41.8 | 0.179 | 15.7 | 35.1 | 58.2 | 95.5 | 203.4 | 0.206 | GA104, 5888 shaders, 1725MHz, 8GB GDDR6@14Gbps, 448GB/s, 220W |
| GeForce RTX 3060 Ti | $223 ($400) | 37.6 | 0.169 | 14.1 | 31.5 | 52.3 | 86.4 | 195.9 | 0.192 | GA104, 4864 shaders, 1665MHz, 8GB GDDR6@14Gbps, 448GB/s, 200W |
| GeForce RTX 3060 12GB | $229 ($330) | 34.7 | 0.151 | 16.5 | 30.4 | 43.6 | 65.9 | 161.5 | 0.215 | GA106, 3584 shaders, 1777MHz, 12GB GDDR6@15Gbps, 360GB/s, 170W |
| GeForce RTX 3050 8GB | $152 ($250) | 22.2 | 0.146 | N/A | 18.8 | 28.4 | 48.3 | 122.1 | 0.182 | GA106, 2560 shaders, 1777MHz, 8GB GDDR6@14Gbps, 224GB/s, 130W |
| GPU Testbed |
|---|
| AMD Ryzen 7 9800X3D CPU |
| Asus ROG Crosshair 870E Hero |
| G.Skill 2x16GB DDR5 |
| Crucial T705 4TB SSD |
| Corsair HX1500i PSU |
| Cooler Master 280mm AIO |
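As a sanity check on the derived columns: the "Value" and "Efficiency" figures in the table are just the overall FPS divided by the current eBay price and by the measured power draw, respectively. Here's a minimal sketch using the RTX 3090 Ti row (all numbers come straight from the table):

```python
# Derived-column math for the table, using the RTX 3090 Ti row.
overall_fps = 84.4   # overall performance (geomean FPS across the test suite)
ebay_price = 851     # approximate current eBay price, USD
power_watts = 410.9  # measured average gaming power draw

value = overall_fps / ebay_price        # FPS per dollar
efficiency = overall_fps / power_watts  # FPS per watt

print(f"Value: {value:.3f} FPS/$")            # 0.099, matching the table
print(f"Efficiency: {efficiency:.3f} FPS/W")  # 0.205, matching the table
```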
You'll notice the lack of 4K results on the RTX 3050, and the reasoning there should be obvious. Several games wouldn't even run at all at those settings, and most of the others would be well into the low single digit FPS range. What's interesting here is how the RTX 3060 12GB actually outpaces the RTX 3060 Ti at 4K, and basically matches it at 1440p ultra. When Nvidia claims the new RTX 50-series cards like the RTX 5060 Ti 8GB, RTX 5060 8GB, and RTX 5050 8GB have "enough" VRAM, it's full of poop. Period. Anyway, let's dissect things a bit.
The RTX 3090 Ti obviously rates as the fastest of this generation, but it's not massively faster than the vanilla 3090 — it's about 10% faster overall. What's more, it was a cash grab from Nvidia, and it arrived just as Ethereum cryptomining was winding down, so the demand simply wasn't there anymore. Its only saving grace these days is that it has 24GB of VRAM, so it's still plenty potent for running certain AI workloads if you're into that sort of thing.
The RTX 3090 and RTX 3080 Ti have pretty similar specifications overall, with the only real difference being the doubling of VRAM on the 3090. It's only about 5% faster overall, but there are a few instances where the additional VRAM helps more, like in Spider-Man 2. That game doesn't necessarily require more than 12GB to run well, but it benefits even at 1080p and 1440p. Final Fantasy XVI also likes having more VRAM, and Cyberpunk 2077 exhibits far better 1% lows than on the 3080 Ti.
At the time the initial RTX 3080 10GB launched, there were already complaints about the VRAM situation. It wasn't dire at the time, but five years later the lack of memory certainly hinders performance, especially at max settings and resolutions. The 3080 Ti ends up about 12% faster than the 3080 10GB at 1080p and 1440p, but then the gap pole-vaults to 33% at 4K ultra. So yeah, Nvidia, 10GB these days has become a serious liability, and DLSS upscaling only gets you so far.
Below that, we enter the domain of the 8GB cards, with the 3060 12GB being the standout exception. That's still one of the best overall GPUs Nvidia has released in quite a long time, with sufficient VRAM and performance coupled to an attractive price. There's a reason that remains one of the most popular GPUs on the Steam Hardware Survey.
The RTX 3080 10GB still has plenty of memory bandwidth, thanks to the 320-bit interface, plus 10GB does help relative to 8GB. The 3080 beats the 3070 Ti by just 13% at 1080p medium, but that increases to 28% at 1080p ultra, and a whopping 52% at 1440p ultra. 4K ultra is basically out of the picture on either card, at least in quite a few of the games I tested, but the gap there ends up 70% in favor of the 3080.
The 3070 Ti and 3070 end up a lot closer, since both are saddled with the same 8GB VRAM. It's only about an 8% lead for the Ti. The 3070 in turn delivers about 11% more performance than the 3060 Ti, before we get to the RTX 3060 12GB.
At 1080p medium, the 3060 Ti ends up being 31% faster than the 3060. That's thanks to having way more compute, plus more memory bandwidth. But even at 1080p ultra, the VRAM limitations already start to come into play and the gap shrinks to 20%. Considering these were originally $400 vs $330, that's about in line with the price difference. But then at 1440p ultra, it's only a 4% lead for the 3060 Ti, and finally at 4K ultra the 3060 12GB delivers 17% higher performance. That's purely due to the VRAM. And it's not even a Pyrrhic victory, as there are games like Stalker 2 where the 3060 12GB manages 30 FPS at 1440p, compared to a stuttering 7 FPS on the 3060 Ti.
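The percentage leads quoted throughout are straight ratios of the overall FPS figures from the table. A quick sketch of that arithmetic with the 3060 Ti vs 3060 12GB numbers:

```python
# Overall FPS figures for both cards, taken from the table above.
fps_3060ti = {"1080p medium": 86.4, "1080p ultra": 52.3,
              "1440p ultra": 31.5, "4K ultra": 14.1}
fps_3060 = {"1080p medium": 65.9, "1080p ultra": 43.6,
            "1440p ultra": 30.4, "4K ultra": 16.5}

for res in fps_3060ti:
    # Lead of the 3060 Ti over the 3060 12GB as a percentage.
    lead = (fps_3060ti[res] / fps_3060[res] - 1) * 100
    print(f"{res}: 3060 Ti lead is {lead:+.0f}%")
# Prints +31%, +20%, +4%, then -15% at 4K ultra — and a -15% deficit
# from the Ti's side is the same thing as a ~17% advantage for the
# 3060 12GB (16.5 / 14.1 = 1.17).
```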
Wrapping things up, the RTX 3050 8GB was mostly interesting at launch because you could find it for closer to MSRP. 8GB was "enough" for a $250 graphics card at the time, and years later it's still somewhat tolerable if the GPU is cheap enough. But the 3050 8GB was often slower than the 2060 6GB due to having less compute and less bandwidth. That's a topic for another day, however, once I've finished retesting the Turing generation. (Spoiler: 6GB of VRAM does not hold up well these days!)
Here are the charts for the overall performance tests.