As an Amazon Associate I earn money from qualifying purchases.

Friday, October 10, 2025

NVIDIA GeForce RTX 5050 review: Still 8GB, so it needs DLSS to try to save the day

The minimum entry point for NVIDIA's Blackwell desktop lineup, the RTX 5050 lags behind the 5060 in performance per dollar.

NVIDIA GeForce RTX 5050 Introduction

NVIDIA launched the RTX 5050 on July 1, 2025. It’s an interesting card because we didn’t have a desktop RTX 4050, but we did have an RTX 3050. We didn’t have an RTX 2050 either, but we had the GTX 1650, and if you go way back, the GTX 1050, GTX 950, and GTX 750 before that. This is the first true budget desktop GPU from Team Green in over three years. How does it fare? We’ll get to that in a moment. First, let’s talk specs and how the 5050 stacks up on paper.

This is where things get a little interesting. The 5050 is a $250 card, and the step up to the 5060 costs $50 more — a 20% increase in price. The 5050 comes with 20 streaming multiprocessors (SMs) at 128 CUDA cores per SM, so 2,560 CUDA cores, versus 30 SMs (3,840 CUDA cores) on the 5060 — a 50% increase in raw compute.

Now, the 5050 does have slightly higher clock speeds, and the ASUS card, like I said, is factory overclocked, but that isn’t going to close a 50% gap. It might narrow the compute deficit to 40–45%. The RTX 5050 also uses GDDR6 memory running at 20 GT/s, while the 5060 and above all use GDDR7 at 28 GT/s (30 GT/s on the RTX 5080, but that’s a different story).
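Here’s a quick back-of-the-envelope sketch in Python using those numbers. The boost clocks are my own rough figures rather than official specs, and the 128-bit memory bus on both cards comes from the published spec sheets.

# Rough theoretical comparison of the RTX 5050 and RTX 5060.
# Boost clocks are approximate; both cards use a 128-bit memory bus.

def fp32_tflops(cuda_cores, boost_ghz):
    # Theoretical FP32 rate: 2 FLOPs (one FMA) per core per clock.
    return cuda_cores * 2 * boost_ghz / 1000

def bandwidth_gbs(transfer_gts, bus_bits):
    # Memory bandwidth in GB/s: transfer rate times bus width in bytes.
    return transfer_gts * bus_bits / 8

rtx5050 = {"cores": 20 * 128, "ghz": 2.6, "gts": 20, "bus": 128, "price": 250}
rtx5060 = {"cores": 30 * 128, "ghz": 2.5, "gts": 28, "bus": 128, "price": 300}

for name, g in (("RTX 5050", rtx5050), ("RTX 5060", rtx5060)):
    print(f"{name}: {fp32_tflops(g['cores'], g['ghz']):.1f} TFLOPS, "
          f"{bandwidth_gbs(g['gts'], g['bus']):.0f} GB/s, ${g['price']}")

compute = fp32_tflops(rtx5060["cores"], rtx5060["ghz"]) / fp32_tflops(rtx5050["cores"], rtx5050["ghz"])
memory = bandwidth_gbs(rtx5060["gts"], rtx5060["bus"]) / bandwidth_gbs(rtx5050["gts"], rtx5050["bus"])
price = rtx5060["price"] / rtx5050["price"]
print(f"5060 advantage: +{(compute - 1) * 100:.0f}% compute, "
      f"+{(memory - 1) * 100:.0f}% bandwidth, +{(price - 1) * 100:.0f}% price")

With those assumed clocks, that works out to roughly 44% more compute and 40% more bandwidth for 20% more money.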

So what we’re really talking about is almost 50% more theoretical performance for the RTX 5060 — probably 40% in a lot of situations, for 20% more money. That right there makes the RTX 5050 a non-starter in my opinion. And yet I bought one because, hey, let’s see how it performs, right? I did pay my own money for this card. Thanks, ASUS and NVIDIA — you didn’t send it to me, and I get to say whatever I want.

Wednesday, October 8, 2025

NVIDIA GeForce RTX 5060 8GB review: 8GB just isn’t enough for a modern $300+ graphics card

The RTX 5060 runs well enough when it doesn't run out of VRAM, basically providing a horizontal shift from the RTX 4060 Ti 8GB for $100 less (two years later).

GPU Testbed
Asus RTX 5060 Dual OC
AMD Ryzen 7 9800X3D CPU
Asus ROG Crosshair X870E Hero
G.Skill 2x16GB DDR5
Crucial T705 4TB SSD
Corsair HX1500i PSU
Cooler Master 280mm AIO

The NVIDIA GeForce RTX 5060, with 8GB of VRAM, lands at the $299 price point, with factory overclocked cards pushing prices slightly higher. It replaces the RTX 4060 8GB in the product stack, with largely similar specifications that we'll get to in a moment. Built on the Blackwell architecture, the card promises higher performance than its predecessor, with GDDR7 memory boosting bandwidth by an impressive 65%. Still, 8GB of memory on a card that costs over $300 is a concern, one that's much bigger now than when the 4060 launched — and it was a problem then as well!

We’ve been seeing Nvidia GPUs with 8GB of VRAM since the GTX 1070 in 2016. That was a $379 card at launch, which would be about $500 in today's money... but we also expect computer components to get faster at the same price over time. Regardless, the VRAM is a potential sticking point. 1080p medium to high settings should work well enough, but maxed-out settings, or even high settings with upscaling and frame generation, could push beyond the GPU's VRAM capacity. 8GB of VRAM simply doesn’t cut it anymore for a “new” GPU launching in 2025.
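If you want to check those two figures, here’s the quick math in Python. The RTX 4060’s 17 GT/s GDDR6 on a 128-bit bus and the roughly 1.32x cumulative inflation from 2016 to 2025 are my own assumptions, not numbers from this post.

# Sanity check of the 65% bandwidth claim and the GTX 1070 price math.
# The 4060 memory speed and the ~1.32x inflation factor are assumptions.

def bandwidth_gbs(transfer_gts, bus_bits=128):
    return transfer_gts * bus_bits / 8

rtx4060_bw = bandwidth_gbs(17)   # 17 GT/s GDDR6 -> 272 GB/s
rtx5060_bw = bandwidth_gbs(28)   # 28 GT/s GDDR7 -> 448 GB/s
print(f"4060 -> 5060 bandwidth uplift: +{(rtx5060_bw / rtx4060_bw - 1) * 100:.0f}%")

gtx1070_msrp = 379
inflation = 1.32                 # approximate 2016 -> 2025 CPI change
print(f"GTX 1070 launch price in 2025 dollars: ~${gtx1070_msrp * inflation:.0f}")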

We’ve benchmarked the ASUS Dual RTX 5060, comparing it to a variety of current and previous generation GPUs across our full suite of modern titles at 1080p, 1440p, and 4K. The short version? It’s generally a solid performer for 1080p gaming, but like its predecessors, the limited memory capacity becomes a serious concern the moment you crank up the settings. Indiana Jones and the Great Circle, for instance, flat-out refused to run on 8GB NVIDIA cards, even though AMD and Intel GPUs with 8GB managed to give it a go. In short, it's NVIDIA doing its best to marginalize the lower-tier GPUs.

Monday, September 8, 2025

RTX 5090 vs 4090 vs 3090/Ti vs Titan RTX: Comparing Four Generations of Nvidia's Fastest GPUs

How do Nvidia's fastest GPUs of the past four generations stack up, seven years after the Titan RTX launch?

Seven years ago, Nvidia released the Titan RTX — the last of the Titans, with the xx90 series GPUs inheriting the crown. People like to complain about how expensive the halo GPUs are, but it's nothing new. The Titan RTX launched at $2,499, which no GeForce card has ever (officially) matched. Of course, it offered some other extras, like improved professional graphics support. The Titan RTX might seem like a terrible deal compared to the RTX 2080 Ti, at twice the price for a minor performance bump and more than double the VRAM, but compared to the Quadro RTX 8000 and Quadro RTX 6000, it was one-third the cost with most of the other features intact.

I digress. The halo GPUs from Nvidia have long since ceased to compete as a value proposition, though arguably the 5090 and 4090 are better "values" than the step down 5080 and 4080 for the most recent generations. We've put together full GPU performance hierarchies for the Blackwell RTX 50-series, Ada Lovelace RTX 40-series GPUs, and Ampere RTX 30-series, and we're working on a Turing RTX 20-series hierarchy. But in the meantime, with testing of the Titan RTX complete, we wanted to look at these top-tier GPUs to see the progression over the past seven years.

Sunday, September 7, 2025

Nvidia GeForce RTX 50-Series Graphics Card Performance Hierarchy

The Nvidia Blackwell architecture mostly rehashes Ada, using the same process node. Only the RTX 5090 stands out as a major (-ly expensive) upgrade.


Welcome to the modern era of Nvidia graphics cards, courtesy of the Blackwell architecture. Except, if we're being honest here — unlike Nvidia — not a whole helluva lot has changed architecturally with Blackwell relative to Ada Lovelace. We can sum up the major upgrades quite quickly.

First, Blackwell has native FP4 support on the tensor cores, which so far has only been used in a handful of applications, like the FP4 Flux AI image generation test built into a special version of UL's Procyon benchmark. Blackwell also offers native FP6 support, sort of a hybrid between FP4 and FP8 that can potentially reduce memory requirements, but our understanding is that it's not really any faster than just using native FP8 operations.
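To make the memory angle concrete, here's a tiny Python illustration. The 12-billion-parameter count is just a placeholder in the rough ballpark of an image model like Flux, not an official figure, and this only counts the weights themselves.

# Bytes needed just to hold model weights at different precisions.
# 12 billion parameters is an illustrative number, not an official spec.

PARAMS = 12e9

for fmt, bits in (("FP16", 16), ("FP8", 8), ("FP6", 6), ("FP4", 4)):
    gib = PARAMS * bits / 8 / 2**30
    print(f"{fmt}: ~{gib:.1f} GiB of weights")

# FP16 ~22.4 GiB, FP8 ~11.2 GiB, FP6 ~8.4 GiB, FP4 ~5.6 GiB -- roughly the
# difference between spilling out of an 8GB card and fitting with room to spare.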

Blackwell does offer some new features for ray tracing applications, but as with other ray tracing tools, developer uptake can be quite slow, particularly when it comes to new games actually using the features. An enhanced triangle cluster intersection engine allows Mega Geometry (a new buzzword from Nvidia!) to better render massively complex scenes without bogging down. It feels a bit to me like the old over-tessellation approach, where Nvidia got some games (like Crysis 2 or 3, IIRC) to use massive amounts of tessellation... on flat surfaces! Why? Because AMD's GPUs struggled mightily with the workload, so it made Nvidia's GPUs look better.

Friday, September 5, 2025

Nvidia GeForce RTX 40-Series Graphics Card Performance Hierarchy

Ada Lovelace delivered massive efficiency improvements, but needed the framegen crutch to really "boost" performance


Everyone was looking forward to the Nvidia GeForce RTX 40-series graphics cards. After two horribly long years of shortages caused by cryptomining, along with the influence of the Covid pandemic, GPU prices and availability had finally started to settle down by the summer of 2022. Ethereum mining was dead, and rumors were that Nvidia was pulling out all the stops with its next-generation Ada Lovelace architecture. Smaller lithography courtesy of TSMC's cutting-edge 4N (4nm Nvidia) process node would leapfrog what AMD was planning. Everything seemed like it would be great.

When the RTX 4090 arrived on October 12, it made no excuses. It offered a substantial 50–75 percent performance improvement over the previous-generation RTX 3090, only cost $100 more, and while the supply was definitely constrained, at least cryptominers weren't snapping up every GPU. The RTX 4080 followed a month later, on November 16, but by then things were turning sour. Nvidia had pre-announced the 4080 16GB and 4080 12GB variants, but the two cards had virtually nothing in common, and people were pissed. Also, prices were seemingly still influenced by the crypto boom, even though that had ended. At $1,199, the 4080 took the overpriced 3080 Ti's place, while the 4080 12GB was set to cost $899 despite offering far less in the specs and performance departments.

Well, one thing led to another, and Nvidia axed the 4080 12GB name, renaming the card the 4070 Ti — with a $100 price cut. But it was still clear that the rest of the RTX 40-series cards wouldn't quite manage to live up to the lofty results posted by the 4090. The 4080 was certainly faster than the 3080, by 50–70 percent depending on the game and settings used. But it was also 70% more expensive than its predecessor. Alternatively, it was about 35% faster than the RTX 3080 Ti, which was still a pretty decent result.

Saturday, August 30, 2025

Nvidia GeForce RTX 30-Series Graphics Card Performance Hierarchy

How do Nvidia's Ampere RTX 30-series GPUs stack up five years later?


Nvidia launched its GeForce RTX 30-series GPUs with the RTX 3080 10GB card on September 17, 2020 — very nearly five years ago as I write this. It followed that with the RTX 3090 one week later, and then the RTX 3070, RTX 3060 Ti, RTX 3060 12GB, and RTX 3050 8GB over the following months. A bit less than one year after the initial launch, Nvidia then released the 3080 Ti and 3070 Ti, with the final RTX 3090 Ti coming in the spring of 2022.

The GPUs were all good on paper, for the most part, but at the time of launch virtually every one of these graphics cards ended up being a massive disappointment for gamers. Pardon me if I'm dredging up old memories that might still cause PTSD, but late 2020 through early 2022 was a perfect storm of awfulness in the graphics card industry. Ethereum mining was massively profitable during portions of that time, to the point where miners were scooping up every viable GPU and were often willing to pay over triple the MSRPs. On top of that, we had the Covid pandemic causing more people to work from home — or stay home to play games — and plenty of folks were upgrading their PCs. Massive GPU shortages ensued.

But now all of that is in the past, and we've had two new generations of Nvidia GPUs in the interim: the RTX 40-series using the Ada Lovelace architecture arrived beginning in the fall of 2022, and the RTX 50-series with the Blackwell architecture launched at the start of 2025. You wouldn't necessarily go out and buy a new RTX 30-series GPU today, but how do these older-generation GPUs compare to the modern stuff?

Wednesday, August 27, 2025

Introducing the 2025 GPU Hierarchy Testbed

Every benchmark suite begins with a selection of the appropriate testbed. For the GPU hierarchy, we want the fastest possible system to go with our graphics cards, ensuring that nothing else holds the GPU back — as much as we can, at least. The reality is that lower settings and resolutions are less demanding, so 1080p testing in particular simply won't allow the fastest cards to reach their full potential. Faster CPUs routinely come out, but often the gains are only 5–10 percent, so we want to make the most of what's available. Which brings us to our component choices.

GPU Testbed
AMD Ryzen 7 9800X3D CPU
Asus ROG Crosshair X870E Hero
G.Skill 2x16GB DDR5
Crucial T705 4TB SSD
Corsair HX1500i PSU
Cooler Master 280mm AIO

Right now, the crown for the fastest CPU for gaming goes to the AMD Ryzen 7 9800X3D. It's "only" an 8-core, 16-thread CPU, but the number of games that truly push more than eight CPU-heavy threads can probably be counted on one hand. More importantly, having a large 64MB L3 cache stacked on top of the existing 32MB L3 cache — that's the "X3D" part of the model name — proves extremely beneficial for a lot of games. Not everything benefits to the same degree, but overall the 9800X3D generally outpaces the more expensive 16-core, 32-thread Ryzen 9 9950X3D. That's because the 9950X3D only has the extra L3 cache on one of the two 8-core chiplets, and the extra traffic between the various chips works against the higher core counts in most games. It certainly helps that the 9800X3D costs $320 less.

Tuesday, August 26, 2025

Welcome to the GPU Hierarchy


Hey there! If you're a long-time reader of my blog (all ten of you), you're going to see a name change. I've rebranded as The GPU Hierarchy, and testing of graphics card performance will be my primary goal going forward.

What started out as a blog about cryptocurrency mining WAAAY back in the day has morphed quite a bit. I haven't done any mining in years, as it has become generally unprofitable — especially considering the upfront hardware costs — but I do know a lot about graphics cards. If you know who I am and my employment history, that shouldn't be a surprise, but I'm going to try to stay mostly anonymous here.

Yes, it's been about two years since I wrote anything here. My full-time job was keeping me very busy. Now I've got a bit more free time, so I'm going to put that to good use.

I've been testing (and retesting...) all of the modern graphics cards for a while, and I'm going to start publishing a full suite of performance results. We'll have tables and charts of performance data, along with power, efficiency, and other metrics. Everything will be linked to Amazon listings (or at least a search of Amazon listings), which helps support the site. But more importantly, I want this to become a great resource for people looking to purchase or upgrade their gaming GPU.

I've assembled a test suite of 15 reasonably modern games, three of which have ray tracing effects enabled. That's 20% of games with RT enabled, and I feel that's probably about as much weight as ray tracing deserves. Upscaling and frame generation techniques will be left off, because I view those as performance enhancements rather than baseline measurements.
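One more methodology note: when per-game numbers get rolled up into a single overall score, a geometric mean is a common choice because it keeps one outlier title from dominating. Here's a minimal Python sketch with placeholder FPS values, not real test data.

# Illustrative only: combining per-game results with a geometric mean so a
# single outlier title can't skew the overall number. FPS values below are
# placeholders, not actual benchmark results.
from math import prod

fps = {
    "Game A (raster)": 144.0,
    "Game B (raster)": 98.0,
    "Game C (ray traced)": 61.0,
}

geo_mean = prod(fps.values()) ** (1 / len(fps))
print(f"Geometric mean: {geo_mean:.1f} fps")  # ~95.1 fps for these placeholders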

To be clear, I routinely enable DLSS and FSR when gaming, but fundamentally those differ in appearance — with XeSS being the red-headed stepchild that differs yet again. Plus we now have DLSS 2/3/4, FSR 2/3/4, and XeSS 1.x/2.x as options, all of which look and perform differently! Perhaps that's a story for another day and some deeper investigations, but for now we'll stick with reference performance.

So, welcome back if you've been here before (when it was hosted at HolyNerdvana). If you're new, welcome to my graphics card blog and site. I'm a veteran of the GPU industry, having tested and reviewed a variety of hardware for over two decades. I know a lot about GPUs and graphics cards, I have a variety of opinions, and this is where I'll be sharing them now.

This is a fully independent website, meaning there's no big publisher telling me what to write, when to write it, or how many long weekends to spend hunting for BS Black Friday, Prime Day, Labor Day, etc. deals. Just the straight stuff here. I hope you find it useful, and comments are welcome. Note also the general lack of advertising, which should hopefully mean the pages load quickly. If things go well, maybe I'll get some sponsorships, but I hope to keep such things to a minimum — a throwback to the good old days of the web.